Article

A Methodology to Evaluate User Experience for People with Autism Spectrum Disorder

1 Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Valparaíso 2340000, Chile
2 Instituto Universitario Centro de Investigación Operativa (CIO), Universidad Miguel Hernández de Elche, Avenida de la Universidad s/n, 03202 Elche, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(22), 11340; https://doi.org/10.3390/app122211340
Submission received: 1 October 2022 / Revised: 25 October 2022 / Accepted: 4 November 2022 / Published: 8 November 2022
(This article belongs to the Special Issue New Insights into Human-Computer Interaction)

Abstract
People with Autism Spectrum Disorder (ASD) have an affinity for technology, which is why multiple studies have implemented technological proposals focused on developing skills in people with ASD. These studies have evaluated the user experience (UX) and/or usability of their proposals through different evaluation methods, so that the proposals are friendly and usable for users with ASD. However, the evaluation methods and instruments used do not consider the specific characteristics and needs of people with ASD, and, furthermore, details are lacking in their implementations. To formalize the UX evaluation process, we propose a three-stage methodology to evaluate the UX of systems, products and services used by adults with ASD. In its processes, evaluation methods and instruments, the methodology considers the characteristics of people with ASD so that, through the UX evaluation, the satisfaction and perception of these users regarding the evaluated system, product or service are improved. This proposal has been validated through the opinions of experts with knowledge in UX/Usability and ASD on two occasions, which have contributed to specifying, restructuring, and improving the methodology.

1. Introduction

Autism Spectrum Disorder (ASD) is a condition characterized by deficits in social communication and social interaction, as well as restricted, repetitive patterns of behavior, interests, and activities [1]. Characteristics that people with ASD can present include: a tendency towards visual and structured thinking [2], delayed development of fine motor skills [3], difficulties when generalizing skills to real-world contexts [4], susceptibility to experiencing depression and frustration [5], hyper- or hypo-reactivity to sensory input or an unusual interest in sensory aspects of the environment [1], and a preference for technology, as it provides a safe and trustworthy environment [6].
It is important that the process to evaluate the user experience (UX), as well as the evaluation methods and instruments, are selected and adapted considering the specific characteristics of users with ASD, to ensure a positive and rewarding experience when interacting with systems, products or services.
Multiple studies have looked at how to evaluate UX in systems, products, or services used by people with ASD; however, most of these studies do not present sufficient details of the evaluations performed and show a lack of empirical evidence [6]. Investigations have proposed different ways to evaluate UX and usability through different evaluation methods, defined as “a procedure composed of a series of well-defined activities for the collection of data related to the interaction of the end user (…)” [7], such as the system usability scale (SUS) [8] and heuristic evaluation [9], and through instruments, which are sets of elements used by an evaluation method that can vary depending on the application’s context. Many of the evaluation methods and instruments used in these evaluations are not adapted to the characteristics and needs of people with ASD.
Considering this, the objective of this paper is to present a three-stage methodology to evaluate the user experience for people with ASD, as there is a need for a formal methodology to evaluate user experience that is built upon the needs and characteristics of users with ASD and provides proper guidelines and evaluation methods for them.
This methodology was developed by researching the characteristics of the users through a systematic literature review [6], proposing a set of adapted UX factors [10], selecting evaluation methods suitable for the needs of users with ASD, and defining a logical step-by-step process. This process covers selecting the evaluation methods to execute, planning the experiments, selecting evaluators and participants, carrying out the selected methods, and performing qualitative and quantitative analysis, which results in a detailed report on the UX of the system, product or service evaluated.
A preliminary version of the methodology was published in 2021 [11]. It was reviewed and validated through the opinions of three experts in UX and later via an expert judgment validation by 22 experts with knowledge of UX, ASD or both, which resulted in the proposal presented in this paper.
This document is organized as follows: Section 2 presents the theoretical background; Section 3 describes relevant related work; Section 4 shows the process to create the methodology; Section 5 presents the UX evaluation methodology for people with ASD; Section 6 presents the validation process; and Section 7 presents our conclusions and future work.

2. Theoretical Background

Below are brief descriptions of ASD, UX, and UX models, which are relevant for this investigation.

2.1. Autism Spectrum Disorder

Autism Spectrum Disorder (ASD) is a condition characterized by repetitive patterns of behavior and difficulties with social interaction and communication, as defined in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) [1].
People with ASD may or may not present secondary symptoms, such as intellectual disability, lack of verbal language [1], a tendency towards visual and structured thinking [2], delay of fine motor skills development [3], difficulties when generalizing skills to real-world contexts [4], susceptibility to experiencing depression and frustration [5], and hyper- or hypo-reactivity to sensory input [1].
The DSM-5 establishes three categories of severity for ASD [1] based on the degree of support that the person needs, which varies from level 1 “Requires support” to level 3 “Requires very substantial support”.

2.2. User Experience

ISO 9241-210 [12] defines user experience (UX) as “user’s perceptions and responses that result from the use and/or anticipated use of a system, product or service”. Additionally, the standard describes UX as “user perceptions and reactions, including user emotions, beliefs, preferences, perceptions, comfort, behaviors, and achievements that occur before, during, and after use”. In other words, UX is understood as the internal and emotional state that people perceive before, during and after the interaction with a system, product, or service.
A part of UX is usability, which is defined by the ISO 9241-11 [13] standard as “the extent to which a system, product or service can be used by specific users to achieve specific goals with effectiveness, efficiency and satisfaction in a specific context of use”. The concept of usability is related to the fulfillment of tasks and the satisfaction experienced by users, therefore, a higher degree of usability of a system, product, or service after user interaction leads to a better UX.

2.3. UX Evaluation Methods

A system or product can be evaluated using usability and/or UX evaluation methods. UX evaluation methods focus on detecting how the user feels about the interaction with the evaluated system or product [14]. On the other hand, usability evaluation methods are “a procedure composed of a series of well-defined activities for the collection of data related to the interaction of the end user with a software product and/or how a specific feature of this product of software contributes to achieving a certain degree of usability” [7]. Considering that the concept of user experience includes usability, we have chosen a set of UX evaluation methods that will help us effectively evaluate the UX and usability of systems, products or services used by people with ASD.
For our proposed methodology for evaluating systems, products, or services for people with ASD, we have selected evaluation methods under the following usability method classifications, as defined by Fernandez et al. [7]:
  • Inspections: Reviews carried out by a group of evaluators using their expert judgement, where the participation of the users of the system or product is not included.
  • User Testing: Users evaluate the product or system after interacting with it.

2.4. UX Factors for People with ASD

We have proposed a set of nine UX factors for systems used by people with ASD [10]: engaging, predictable, structured, interactive, generalizable, customizable, sense-aware, attention-retaining, and frustration-free. These nine UX factors have been considered when designing the tasks to be performed during the execution of the methodology, as well as when adapting instruments of the evaluation methods, such as the property checklist [15].
The nine UX factors have been created based on two approaches: (1) the characteristics, affinities and needs of people with ASD, and (2) the design guidelines and/or recommendations provided by studies on technological systems for people with ASD and/or interventions with these users.
This set of UX factors provides a theoretical basis for the adaptation or creation of evaluation methods, instruments, and methodologies that are focused on the user experience of people with ASD.

3. Related Work

To complement our findings in a previous systematic literature review [6], we have reviewed the literature that has emerged since the year 2019 in order to update the conclusions previously obtained regarding these related studies.
In recent times, the amount of research focused on developing systems and/or products for people with ASD has increased, which is possibly due to the growing interest in the affinity that people with ASD have with technology.
For systems and/or products developed for people with ASD to be friendly and usable, research has evaluated the satisfaction and/or perception of experts in the domain (psychologists, differential teachers, speech therapists), tutors and/or people with ASD, through different evaluation methods.
Studies have evaluated their proposals through the application of simplified and/or complete versions of the system usability scale (SUS) [8]. Some studies have modified the SUS scale (using simplified language, incorporating emoticons, or reducing the scale) when used with users with ASD [16,17]. Other studies have evaluated their proposals with experts in ASD and/or tutors of users with ASD, using the SUS scale in its complete [18,19,20] or reduced [21] version.
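As an illustration of how SUS results are typically scored (this is the standard SUS scoring procedure, not a detail reported by the cited studies), the following sketch converts ten 1–5 Likert responses into the 0–100 SUS score:

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Standard SUS scoring: odd-numbered items (positively worded) contribute
    (response - 1); even-numbered items (negatively worded) contribute
    (5 - response); the summed contributions are scaled by 2.5 to 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

Note that simplified or reduced SUS variants, such as those applied in [16,17,21], alter this scoring accordingly.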
Other researchers have evaluated their proposals through sets of heuristics. Ramos-Aguiar and Álvarez-Rodríguez [22] state that they have evaluated their proposed application using Nielsen’s heuristics [23]. Camargo et al. [24] mention having evaluated their mobile application with a heuristic evaluation using the Semiotic Interface sign Design and Evaluation (SIDE) framework [25].
Studies mention having evaluated their proposals using questionnaires. Susanti et al. [26] have evaluated the usability of their application through “direct observation” of users interacting with the application and the execution of the questionnaire proposed by Sehrish Khan [27] which aims to assess usability based on five categories: (1) ease of use, (2) learnability, (3) feedback and good error messages, (4) adequate help and documentation, and (5) appealing interface. Ghabban et al. [28] propose to evaluate their proposal through the creation of a new questionnaire model called M-UTUAT, which is based on seven attributes of the People at the Center of Mobile Application Development (PACMAD) model [29] and three factors of the Unified Theory of Acceptance and Use of Technology (UTAUT) model [30].
Multiple investigations have evaluated their proposals based on a set of evaluation methods. Ahmed et al. [31] mention having evaluated the usability of their proposal with the participation of people with ASD, through the application of three evaluation methods: (1) system usability scale (SUS) [8], (2) VR sensitivity scale and (3) a heuristic evaluation with the Nielsen set of heuristics [9]. Adiani et al. [32] state that they have evaluated their proposal with professionals, parents/caregivers and children with ASD through three evaluation methods: (1) system usability scale (SUS) [8], (2) Acceptability, Likely Effectiveness, Feasibility, and Appropriateness Questionnaire (ALFA-Q) [33] and (3) semi-structured Customer Discovery style interviews. Kim et al. [34] present a process based on four phases, where phase three aims to evaluate the usability of the proposed mobile application through a set of methods. In it, 18 people (9 people without ASD and 9 people with ASD) have been asked to participate in the execution of four evaluation methods sequentially. The evaluation methods used were: (1) Demographic Survey, (2) Think-Aloud Protocol [35], (3) Cognitive Walkthrough [36] and (4) system usability scale [8].
Studies in recent years show interest in evaluating the usability and UX in systems and/or products used by people with ASD. Evaluation methods, such as the system usability scale (SUS) [8] and the use of Nielsen’s set of heuristics [23], are widely applied to find usability problems and provide an overview of the user’s satisfaction of the system, product or service; however, we consider that the level of detail obtained is not sufficient to cover the particular needs that a user with ASD has when interacting with the evaluated system.
Methods such as the Think-Aloud Protocol [35] and Cognitive Walkthrough [36] are useful to obtain information on the perception of the system directly from its final user; however, by depending on the insights of people who may have communication deficits [1] and are susceptible to frustration [5], these methods can deliver unreliable results for users with ASD. It is therefore necessary to have special considerations regarding their implementation, as well as the environment and the way in which we communicate with the user during the test.
In general, we believe that (1) investigations must provide greater detail on the evaluations carried out, (2) the methods and instruments used must consider the characteristics and needs of people with ASD (for example, there should be a set of heuristics focused on people with ASD), and (3) it is important to have the participation of UX/Usability experts, ASD experts, tutors, and people with ASD.

4. Process to Create the Methodology

We have followed a seven-stage process to create the proposed methodology. It has been iterated twice in order to validate and refine the methodology (see Figure 1).

4.1. First Iteration

For the first iteration, all the stages have been executed (see Figure 1):
  • Discovery Stage: We carried out a systematic literature review to know the impact that technology has on people with ASD and how UX/Usability has been evaluated in the proposed systems [6]. The studies indicate that they have evaluated their proposals through various evaluation methods, but these methods have not been particularized considering the characteristics of people with ASD [6].
  • Descriptive Stage: We compiled the information found in the literature on the following topics: (1) characteristics, affinities and needs in people with ASD, (2) recommendations and comments from the authors on UX/Usability evaluations in systems, products or services used by people with ASD, and (3) UX attributes/facets/factors appropriate to the context of our research.
  • Relational Stage: During the research carried out, a set of UX attributes/facets/factors focused on people with ASD has not been found, so by relating the information collected in the descriptive stage, we have proposed a set of nine UX factors for people with ASD [10].
  • Method Selection Stage: We have selected a set of six evaluation methods suitable for people with ASD found in the discovery stage and on the website www.allaboutux.org [37]. Evaluation methods (special emphasis on user tests) based on individual and group questionnaires, focused on emotions and easy expressions, have been excluded.
  • Formalization Stage: With the results obtained in the previous stages, we have formalized and published a preliminary proposal of the methodology to evaluate UX for people with ASD [11]. The proposal considers planning, execution, and result analysis stages.
  • Validation Stage: We have validated the preliminary proposal of the methodology [11] through the opinions of three UX expert researchers. The experts have been asked about elements to add, modify, or eliminate to improve the methodology.
  • Refinement Stage: We refined the preliminary proposal of the methodology [11] based on the results obtained in the previous stage. All comments and recommendations have been considered to improve the methodology.

4.2. Second Iteration

We have carried out a second iteration, which consisted of executing the validation and refinement stages again (see Figure 1).
  • Validation Stage: An expert judgment evaluation was carried out with 22 experts with knowledge in UX/Usability, ASD and/or both. The expert judgment evaluation focused on gathering comments and suggestions of the experts about the stages, substages and the methodology in general (see Section 6).
  • Refinement Stage: We have refined the methodology based on the comments and suggestions obtained in the expert judgment evaluation. The corresponding changes have been made after analyzing the comments and suggestions of the experts (see Section 6), resulting in the final version of the methodology proposed in Section 5.

5. A Methodology to Evaluate UX for People with ASD

Considering that the literature presents neither enough detail in the evaluations nor empirical evidence, we believe that it is necessary to formalize the UX evaluation process for systems, products and services used by people with ASD. For this, we have created a three-stage methodology (see Figure 2). This methodology focuses on evaluating the UX of systems, products or services used by adults with ASD level 1, as defined in the DSM-5 [1]. The methodology aims to maximize the amount of valuable information obtained about the UX of the system, product, or service, so that it can be used to improve the UX and, therefore, make its use satisfactory for people with ASD.
The methodology proposes three sequential stages, starting with the Planning Stage (S1), followed by the Execution Stage (S2) and ending with the Results Analysis Stage (S3). All stages and substages can be seen in Figure 2.
To facilitate the reading of the methodology, the stages and substages are identified with unique IDs (such as S1, S2, S1.1), and their outputs with numerical icons (①, ②, ③), which are consistent with the diagram and tables presented in this document. Figure 3 presents a general description of the methodology and its stages, as well as the inputs and outputs of each of these.
When applying the methodology, consider:
  • Carry out all the stages and substages proposed in the methodology. Consider that the execution stage has a set of substages, which in turn are made up of one or more evaluation methods.
  • The methodology is flexible about which evaluation methods can be carried out in the execution stage. The choice of evaluation methods will depend on the criteria of the researchers, based on the objective of the evaluation, autonomy, and dependence of the participants, resources, and time available.
  • Depending on the stage and substage, one or more documents will be required to start it. Similarly, executing each stage and substage will result in one or more documents.
Next, each of the stages and substages of the proposed methodology is presented in detail.

5.1. S1 Planning Stage

The purpose of the planning stage is to plan the UX evaluations to be carried out, as well as to search for UX/Usability experts, domain experts (professionals who work with people with ASD), participants and tutors. For more detail, see substages S1.1, S1.2, S1.3 and S1.4.
It is important to consider that, before beginning this first stage, the system, product, or service to be evaluated must be identified, along with its characteristics, limitations, and target users.
As complementary material, Table A1 is presented in Appendix A. This table describes what is needed, what to do and what is obtained as an output when implementing this stage.

5.1.1. S1.1 Method Execution Planning

The purpose of the method execution planning substage is the selection and planning of the evaluation methods to be executed.
To determine and select the UX evaluation methods to apply to the system, product, or service to be evaluated, consider that: (1) the objective and scope of the UX evaluation must be defined, and the methods and activities must focus on that objective and scope; and (2) the resources and time available to carry out the UX evaluations must be established.
Based on the objective and scope of the evaluation and on the time and resources available, choose which methods to execute. We propose the following sequences of evaluation methods:
  • If you have enough time and resources, carry out each of the methods presented in the methodology (as shown in Figure 2).
  • If you do not have enough time and resources are limited, execute only the following evaluation methods: property checklist, heuristic evaluation and field observation, since these are considered the baseline of the methodology.
  • In any other case, select evaluation methods based on the complexity of each method. We suggest that:
    Always execute the property checklists method.
    Carry out at least one method of the inspections substage (S2.2) and the user tests substage (S2.3).
    Depending on the time and resources available, one or more inspection methods can be carried out, selected according to the objective of the evaluation and the needs of the study. The order of the inspection methods of substage S2.2, from less to more complex, in terms of effort and resource requirements, is: heuristic evaluation, group-based expert walkthrough and perspective-based inspection.
    Depending on the remaining time and resources, it is recommended to perform the “field observation” method if less time and resources are available, otherwise use the “controlled observation” method.
  • If necessary, you can revise and modify the selection of methods to use as you progress through the execution stage.
Also consider selecting and/or adapting instruments of the evaluation methods to be carried out which are suitable for the needs of users with ASD. The instruments used in each of the selected methods must be particularized for people with ASD and the type of system, product, or service to be evaluated (e.g., select a set of heuristics for transactional systems for people with ASD).
As complementary material, Table A2 is presented in Appendix A. This table describes what is needed, what to do and what is obtained as an output when implementing this substage.
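The selection rules of this substage can be sketched as a simple decision procedure. This is an illustrative sketch only: the function name and parameters are ours, and the final choice of methods always rests with the researchers, as stated above.

```python
def select_methods(enough_time: bool, enough_resources: bool,
                   extra_inspections: int = 0, prefer_field: bool = True):
    """Sketch of the S1.1 method-selection rules (hypothetical helper).

    Returns the list of evaluation methods to execute, with the
    inspection methods ordered from less to more complex.
    """
    if enough_time and enough_resources:
        # Enough time and resources: execute every method in the methodology.
        return ["property checklist", "heuristic evaluation",
                "group-based expert walkthrough", "perspective-based inspection",
                "field observation", "controlled observation"]
    if not enough_time and not enough_resources:
        # Baseline of the methodology.
        return ["property checklist", "heuristic evaluation",
                "field observation"]
    # Intermediate case: always the property checklist, at least one
    # inspection method (added from less to more complex) and one user test.
    inspections = ["heuristic evaluation", "group-based expert walkthrough",
                   "perspective-based inspection"]
    methods = ["property checklist"] + inspections[:1 + extra_inspections]
    methods.append("field observation" if prefer_field
                   else "controlled observation")
    return methods
```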

5.1.2. S1.2 Experiment Design

Once the evaluation objective, scope and methods that will be executed in the UX evaluation process are defined, a set of important aspects for the experiments to be carried out must be created and defined. These aspects are:
  • Define the evaluation objective(s) for each method to be carried out.
  • Define the expected results that will be obtained for each activity to be carried out in the UX evaluation methods.
  • Define scenarios and tasks. Consider that:
    In case of executing at least two evaluation methods that require scenarios and/or tasks, create universal scenarios and/or tasks, which can be used by multiple evaluation methods to reuse and optimize resources.
    The design of tasks and scenarios must consider the characteristics and needs of people with ASD. The instruments to use in each evaluation method in this methodology should be adapted according to the recommendations described in this document, in order to maximize the value for people with ASD.
    The scenarios and/or tasks created should focus on specific characteristics of the system, product, or service. Similarly, scenarios and/or tasks should be concise and clear.
    In case of executing the “controlled observation” method, include an estimated time for completion, and the expected results of each task. These can be compared with the time that the participant took to perform the task, and the results obtained from it.
    It is important to keep in mind that participants may require more time to understand the tasks to be performed, as well as more time to be prepared for the activity and to finish it. It can be frustrating for some users with ASD not to have enough time to complete the activities due to their strict routines [1].
  • Define protocols (set of documents required for the execution of the evaluation methods). Consider:
    Confidentiality agreement: In the case of the implementation of methods that require an audiovisual record of the actions of the participants, confidentiality agreements must be established. The purpose of the confidentiality agreement document is to inform the participant that their actions will be recorded, their identities will not be revealed, and that the purpose of the experiment is to evaluate the system, product, or service and not their abilities, skills, or knowledge.
    Preliminary questionnaire (demographic): Experts and participants should be provided with a preliminary (demographic) questionnaire prior to performing the evaluation methods. The preliminary (demographic) questionnaire aims to identify the profiles and previous experiences that evaluators or participants may have with similar systems, products, or services.
    Perception questionnaire: At the end of the execution of an evaluation method, the evaluators or participants must be provided with a system, product, or service perception questionnaire. The purpose of the questionnaire is to find out the different perceptions that the evaluators or participants have about the system, product or service evaluated and the tasks performed.
    Observer logs: We recommend that UX leaders and/or researchers (in the role of observers) record what was observed during the method execution process in logs. Record potential problems, comments out loud from participants or evaluators, or information that UX leaders or researchers deem necessary in the logs.
    List of tasks: In the case of carrying out evaluation methods where a set of tasks is needed, create two documents with the list of tasks: (1) list of tasks for the evaluators or participants during the experiment, with the goal of providing a sequence of tasks to perform during interaction with the system, product or service; and (2) list of tasks for researchers and/or observers, which details the expected results and expected time for each proposed task.
    List of potential problems: Evaluators or observers should be asked, depending on the method to be carried out, to record the potential problems found through the evaluation (see the execution stage). It is expected that at least one definition, explanation or comment on the potential problem encountered will be provided. Once the potential problems have been identified, each evaluator and/or observer must assign a value of severity, frequency, and criticality to said problems, under the same evaluation scale (see Table A3).
  • All documents delivered to participants must have clear and concise instructions, and if necessary, have visual support.
  • All protocols presented must be established and documented in the Experiment Design document, which will be a necessary input to carry out each of the methods in the execution stage. The details and information provided to the evaluators or participants will depend on the evaluation method to be carried out, as established in the S2 execution stage.
As complementary material, Table A4 is presented in Appendix A. This table describes what is needed, what to do and what is obtained as an output when implementing this substage.
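As an illustration of how the severity and frequency values recorded by each evaluator or observer can be consolidated, the following sketch ranks potential problems by a criticality computed as severity plus frequency. This additive convention is a common one in usability inspection and is used here as an assumption; in practice, apply the scale actually agreed upon in Table A3.

```python
def rank_problems(problems):
    """Rank potential problems by criticality (hypothetical helper).

    Each problem is a dict with 'id', 'severity' and 'frequency' rated on
    a shared scale; criticality = severity + frequency (an assumed,
    commonly used convention). Mutates each dict to record the value and
    returns the list sorted from most to least critical.
    """
    for p in problems:
        p["criticality"] = p["severity"] + p["frequency"]
    return sorted(problems, key=lambda p: p["criticality"], reverse=True)
```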

5.1.3. S1.3 Evaluators Selection

Search and select evaluators to participate in the execution of the UX evaluation methods proposed in the execution stage. We recommend that:
  • The profiles of these evaluators must be: (1) experts in UX/Usability, (2) experts in the specific domain (professionals who work with people with ASD, for example psychologists, speech therapists and differential teachers), and/or (3) preferably experts with knowledge in both areas (UX/Usability and in the ASD domain).
  • Have three to five evaluators [38] for each of the inspection methods to be carried out. Having the support of different professionals will help to include different points of view in the analysis and, eventually, find a greater diversity of potential UX problems.
  • Have an expert who assumes the role of leader. This can be an expert with knowledge in both areas (UX/Usability and in the ASD domain) or a UX/Usability expert. The expert must accept this role and lead each of the evaluation methods to be carried out on the system, product, or service.
  • In case of executing more than one inspection method, it is recommended to have different evaluators for each method to have different points of view and avoid possible biases.
  • Evaluators who are experts in ASD or related areas will be responsible for guiding and educating the other evaluators on how to deal with users and their specific needs during user testing. Each user is different, and their needs may not be visible to an evaluator without ASD domain experience.
As complementary material, Table A5 is presented in Appendix A. This table describes what is needed, what to do and what is obtained as an output when implementing this substage.
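The recommendation of three to five evaluators [38] is often justified with Nielsen and Landauer’s problem-discovery model, in which the expected proportion of problems found by i independent evaluators is 1 − (1 − λ)^i. The sketch below uses λ ≈ 0.31 as an illustrative per-evaluator detection rate; this value is an assumption, as actual rates vary with the system and the evaluators.

```python
def problems_found(evaluators: int, detection_rate: float = 0.31) -> float:
    """Expected proportion of usability problems found by a group of
    independent evaluators, per Nielsen and Landauer's model:
    found(i) = 1 - (1 - lambda)^i, where lambda is the probability that
    a single evaluator detects any given problem (0.31 is illustrative).
    """
    return 1 - (1 - detection_rate) ** evaluators
```

Under this assumption, three evaluators uncover roughly two-thirds of the problems and five evaluators over 80%, which is why adding evaluators beyond five yields diminishing returns.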

5.1.4. S1.4 Participant Selection

Define and select the users who will be participants in the experiments to be carried out in substage S2.3 (User Tests), considering the target users of the system, product, or service to be evaluated. We recommend:
  • Have three to five participants with ASD [38] for each of the evaluation methods of tests with users (controlled observation and field observation).
  • If necessary, we recommend including tutors close to people with ASD, to create a safe environment for the participants. The tutors will take a guiding role for the participants with ASD, in case they are overwhelmed by the task or instructions given. Tutors must not intervene in the participant’s interaction with the system, product, or service.
  • In the case of executing the two test methods with users (controlled observation and field observation), it is recommended to use different participants for each method, to have different points of view and avoid possible biases.
As complementary material, Table A6 is presented in Appendix A. This table describes what is needed, what to do and what is obtained as an output when implementing this substage.

5.2. S2 Execution Stage

The execution stage has the purpose of executing previously selected methods to evaluate the user experience of systems, products, or services for people with ASD.
Before executing an evaluation method and considering the knowledge of the evaluator, it is recommended:
  • For expert evaluators in UX and/or Usability: give them a brief induction on the ASD condition and its main characteristics.
  • For expert evaluators in the ASD domain: give them a brief introduction to UX and the evaluation methods to be executed.
  • Both groups of evaluators should be given a brief introduction about the system, product, or service to be evaluated, indicating its purpose, objective, and scope of the evaluation.
  • The methods proposed in this methodology have been selected considering the characteristics of people with ASD, and they can be used to assess a variety of systems, products, or services for any type of user. Therefore, any necessary adjustments should be considered for their instruments or environment of execution.
  • The proposed evaluation methods can be executed sequentially or in parallel. The order of execution will depend on the decisions made by the investigators.
As complementary material, Table A7 is presented in Appendix A. This table describes what is needed, what to do and what is obtained as an output when implementing this stage.
Next, the aspects to be considered when executing the proposed methods and instruments are detailed.

5.2.1. S2.1 Preliminary Evaluation

First substage of the execution stage. It applies the property checklist inspection method to obtain a preliminary usability evaluation of the system, product, or service.

Property Checklists

The use of the property checklist evaluation method [39] as the first inspection method to be executed in the proposed methodology, aims to quickly detect the deficiencies or pain points that can be found in the evaluated system, product, or service. Conducting this initial assessment will allow the evaluators to quickly make decisions about how to proceed with further assessments, if necessary.
Given the diversity of systems, products or services and objectives that research may have, it is necessary to select the property checklist instrument that best suits this purpose and, if necessary, adapt or create a new property checklist. The selection, adaptation or creation of a property checklist will depend on the judgment of the researchers.
Given the lack of property checklists that consider the characteristics and needs of people with ASD, we have proposed our property checklist to evaluate systems, products or services used by people with ASD [15]. This proposal takes as a theoretical basis our proposal of nine UX factors for people with ASD [10].
Table A8 presents a brief specification of the execution of the property checklist method (see Appendix A), considering the inputs (elements necessary to start the execution of the method), execution steps (details on the execution of the evaluation method) and outputs (set of information and documents obtained after the execution of the method) that are relevant when implementing this evaluation method.

5.2.2. S2.2 Inspections

Second substage of the execution stage. The inspection substage considers three inspection methods: group-based expert walkthrough, perspective-based inspection, and heuristic evaluation. Next, each evaluation method will be explained in detail.

Group-Based Expert Walkthrough

The use of the group-based expert walkthrough [40] allows us to identify potential usability problems, possible design improvements, and solutions to these problems through a group inspection carried out together with professionals “with practical experience” in the domain. The evaluation is based on the execution of a set of task scenarios guided by a leader. The evaluation can be carried out using criteria specific to the domain under study, which are familiar to professionals who do not necessarily have knowledge about UX. Note that an expert with knowledge in both areas (UX/Usability and the ASD domain) or a UX/Usability expert should take the lead role in the evaluation.
By using a group-based expert walkthrough inspection, we can easily include domain experts in the evaluation, as they do not need previous experience in executing UX/Usability evaluations. This can result in a greater number of identified potential problems that are relevant to people with ASD and their characteristics.
Table A9 presents a brief specification of the execution of the group-based expert walkthrough method, considering the inputs, execution steps, and outputs (see Appendix A) that are relevant when implementing this evaluation method.

Perspective-Based Inspection

An inspection method that focuses on the identification of specific usability problems through three main perspectives [41]. These perspectives are based on three types of users: novice, expert, and error-handling. Each evaluator assumes the role and point of view of one user type and inspects the system, product, or service under that role, guided by a set of inspection questions for each perspective. Zhang et al. [41] recommend creating the inspection questions based on an HCI model; we recommend using our nine UX factors [10] as that basis and adapting the perspectives, which can include a novice and an expert user with ASD.
When executing the method, ASD domain experts should support UX/Usability experts if possible. Domain experts may find it easier to put themselves in the shoes of a user with ASD and therefore find specific problems that UX experts may not recognize.
Table A10 presents a brief specification of the execution of the perspective-based inspection method, considering the inputs, execution steps, and outputs (see Appendix A) that are relevant when implementing this evaluation method.

Heuristic Evaluation

An inspection method that focuses on finding potential usability problems in systems, products, or services. This method [9] is based on inspection by evaluators, who look for potential problems using one or more previously selected sets of heuristics, while specifying the severity, frequency, and criticality of each problem. When using the heuristic evaluation method, the use of tasks and scenarios is optional, and the decision is left to the investigators.
Consider evaluating the system, product, or service through one or more sets of heuristics that consider the characteristics and needs of people with ASD. These are our suggested heuristic sets:
  • Khowaja and Salim [42] present a set of 15 system-specific heuristics for children with ASD. These 15 heuristics were created through the adaptation and extension of the Nielsen heuristics [43], based on a study of the characteristics of people with ASD.
  • We are currently developing a set of heuristics to evaluate systems, products, or services for adults with ASD. For the creation of this set of heuristics, we followed the methodology proposed by Quiñones et al. [44], considering as a basis our proposal of nine UX factors for people with ASD [10].
Table A11 presents a brief specification of the execution of the heuristic evaluation method, considering the inputs, execution steps and outputs (see Appendix A) that are relevant when implementing this evaluation method.
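As an illustration of the ratings mentioned above, a common convention in heuristic evaluation (an assumption here; the methodology does not mandate a specific formula) computes the criticality of a problem as the sum of its severity and frequency ratings:

```python
def criticality(severity: int, frequency: int) -> int:
    """Criticality as severity + frequency, each rated 0-4,
    a common convention in heuristic evaluation; adapt the
    scales if the chosen heuristic set defines them differently."""
    if not (0 <= severity <= 4 and 0 <= frequency <= 4):
        raise ValueError("ratings must be in the range 0-4")
    return severity + frequency

# A major problem (severity 3) that occurs frequently (frequency 3)
# yields a criticality of 6 on a 0-8 scale.
print(criticality(3, 3))  # -> 6
```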

5.2.3. S2.3 User Tests

Once the inspections substage is finished, or in parallel with it, the execution of at least one user test is required. Implementing user tests aims to find problems and measure the satisfaction of the participants after their interaction with the system, product, or service.
The user testing substage contemplates two methods: field observation and controlled observations. For both evaluation methods we recommend:
  • Informing and instructing the user about the experiment, prior to carrying it out. Information and instructions must be clear and concise, prioritizing textual and/or visual communication.
  • The interaction with the user, throughout the experiment, must consider the specific characteristics that the participant may present (e.g., not having any contact or physical proximity with people with ASD who may react negatively to this action).
  • Having the consent of users or tutors (if necessary). Inform users and tutors that all information obtained will be treated anonymously.
  • Keeping in mind throughout the experiment the dependence and autonomy of each of the participants. Sometimes the participants may require support from a tutor or a professional to help them.
  • Obtaining the support of one or more tutors in case of any unforeseen event (if necessary). The tutor(s) can guide the user in the tasks to be carried out when necessary and/or assist the evaluators in identifying potential problems that may occur during the execution of the test.
  • The investigator(s) should take on an observer role. Observers must not interfere during the experiment unless it is strictly necessary.
  • Recording whether the tutors or researchers have had to help the participants or interrupt the experiment, since such interventions may affect the results obtained.
  • Recording interactions with the system, product, or service through audiovisual recordings, always maintaining the anonymity of the user.
  • Observers must record what they observed during the sessions in writing. We recommend recording the following information [45,46]: (1) activity performed, (2) actions, events and behaviors observed by users, (3) possible cause of the problem, considering the characteristics of the user, (4) description of the user (to identify the user more quickly in the audiovisual record).
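The written observation record described above can be kept in a simple structured form. The following Python sketch is illustrative only; the field names mirror items (1)–(4) but are not prescribed by the methodology:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ObservationEntry:
    """One written record from a user-test session."""
    activity: str                                      # (1) activity performed
    observations: list = field(default_factory=list)   # (2) actions, events, behaviors
    possible_cause: str = ""                           # (3) cause, considering user characteristics
    user_description: str = ""                         # (4) to locate the user in the recording

log = [ObservationEntry(
    activity="Task 2: send a message",
    observations=["Re-read the instructions twice", "Hesitated at the icon menu"],
    possible_cause="Literal interpretation of ambiguous icon labels",
    user_description="Participant P3, left seat",
)]
```

Entries stored this way can later be exported (e.g., via `asdict`) and cross-referenced with the audiovisual records.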
More details of each evaluation method are given below.

Field Observation

Field observation aims to obtain information from users and detect potential problems of the system, product, or service to be evaluated [39]. These potential problems are detected while observing the user interacting with the system, product, or service in a natural environment. When using the field observation method, we recommend:
  • Scheduling one or more observation sessions for users. Each session must have an estimated duration.
  • During sessions, users should always be in an environment that is familiar to them. For the same reason, it is recommended not to interrupt users’ activities and not to distract them by introducing elements foreign to their usual environment.
Table A12 presents a brief specification of the execution of the field observation method, considering the inputs, execution steps, and outputs (see Appendix A) that are relevant when implementing this evaluation method.

Controlled Observation

Controlled observation aims to identify potential problems that users may experience when interacting with the system, product, or service [39]. It consists of the execution of guided activities under strict controls, such as the ordering of tasks, to eliminate noise in the data obtained, thus minimizing possible knowledge-transfer effects between tasks and avoiding repetitive actions. When using the controlled observation method, we recommend:
  • Recording the time that each participant requires to complete each task.
  • Providing a controlled environment, free from noise and visual distractions. If possible, observe the user in an appropriate laboratory.
Table A13 presents a brief specification of the execution of the controlled observation method, considering the inputs, execution steps, and outputs (see Appendix A) that are relevant when implementing this evaluation method.

5.3. S3 Results Analysis Stage

In this stage the organization and analysis of the results obtained after the execution of the evaluation methods in the execution stage is performed. The purpose of this stage is to organize the information, generate quantitative and qualitative analysis, and create a UX report that includes the main problems found, an analysis of these problems, as well as proposals for solutions to improve the UX of the product, system, or service.
As complementary material, Table A14 is presented in Appendix A. This table describes what is needed, what to do and what is obtained as an output when implementing this stage.

5.3.1. S3.1 Grouping of Potential Problems

The first substage is the grouping of the problems obtained in the execution of the evaluation methods. These problems come from different methods and documents, and the result is a consolidated list of potential problems.
The other documents obtained as outputs in previous stages, such as task lists, preliminary and perception questionnaires, will be used in the analyses without prior grouping.
Consolidating the identified potential problems requires grouping the problems and then identifying the ones that come up repeatedly. To perform this task:
  • Group the potential problems found in the inspection methods: heuristic evaluation, group-based expert walkthrough and perspective-based inspection. Create a consolidated list with the unique potential problems found in the lists obtained in these methods, including the values of severity, frequency, and criticality of each evaluator for each problem. Furthermore, consider modifying the problem titles and definitions if this helps improve the clarity, quality, and consistency of the final consolidated listing.
  • In case of having repeated potential problems, each one should be merged into a single potential problem by averaging the values of severity, frequency, and criticality of the repeated problems, and then defining a consolidated title and definition for it.
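As a sketch of the consolidation step, the following Python function merges potential problems with matching titles by averaging their severity, frequency, and criticality values. Matching by normalized title is an assumption for illustration; in practice, identifying duplicates is a judgment call by the researchers:

```python
from statistics import mean

def consolidate(problems):
    """Merge repeated potential problems into single entries.
    Each problem is a dict with keys: title, definition, severity,
    frequency, criticality, and method (the inspection method that
    reported it). Duplicates are matched by normalized title."""
    groups = {}
    for p in problems:
        groups.setdefault(p["title"].strip().lower(), []).append(p)
    consolidated = []
    for group in groups.values():
        consolidated.append({
            "title": group[0]["title"],           # refine wording manually if needed
            "definition": group[0]["definition"],
            "severity": mean(p["severity"] for p in group),
            "frequency": mean(p["frequency"] for p in group),
            "criticality": mean(p["criticality"] for p in group),
            "sources": [p["method"] for p in group],
        })
    return consolidated
```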

5.3.2. S3.2 Quantitative Analysis

The quantitative information can come from different sources: the results obtained in the execution of the property checklist method, the consolidated list of potential problems, task lists, answers to the perception questionnaires, and answers to the preliminary (demographic) questionnaires. To perform the quantitative analysis, analyze the data obtained in the following categories:
  • Results of the property checklist: After verifying compliance with the items of the checklist used, the satisfaction percentage of the system, product or service can be obtained, as shown in Table A8 (see Appendix A).
    As stated in our proposed property checklist [15], we recommend that evaluators rate each of the items on a scale of 1 to 5, from “Totally non-compliant” to “Totally compliant”. This scale allows the researchers to determine the compliance of each item of the property checklist. In addition, if categories are established, as in our proposal [15], we recommend evaluating compliance with each of the proposed categories as a group.
    To analyze these results, we recommend calculating and graphing the percentages of compliance by category, as well as calculating the global percentage obtained after the evaluation, which will allow us to clearly know the results obtained after completing the property checklist. A graphic way of visualizing the results obtained can be using radar charts [47].
  • List of potential problems: For the potential problems obtained from the inspection methods grouped in substage S3.1, calculate the average and standard deviation of each of the severities, frequencies and criticalities assigned by each evaluator for each potential problem.
    A lower standard deviation may indicate fewer discrepancies between evaluators; conversely, a higher standard deviation implies a notable discrepancy between evaluators, so it is important to analyze those potential problems in detail. In addition, we recommend ordering the potential problems by average severity and criticality, to identify the potential problems that must be addressed with the highest priority.
  • List of tasks: In the evaluation methods where the system, product or service is examined following a set of tasks, document the results and times required for the fulfillment of said tasks. From this, comparisons can be generated between the obtained results and times versus the expected results and times.
  • Preliminary questionnaire (demographic): It is important to capture information from the participants and evaluators, such as their age, gender, experience in the use of similar systems, products, or services, among others. We recommend that for each of the evaluation methods the captured information be graphed, to identify patterns and facilitate its analysis.
  • System, product, or service perception questionnaire: Organize and graph the information captured through Likert scales to obtain a graphic display of the perception of the participants and evaluators and thus facilitate its analysis and identify patterns.
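The calculations above can be sketched as follows. The mapping of the 1–5 ratings to a compliance percentage (all 1s → 0%, all 5s → 100%) is one plausible convention, not necessarily the one defined in the cited checklist [15]; category names and ratings are illustrative:

```python
from statistics import mean, stdev

def compliance_pct(scores):
    """Compliance percentage for a set of 1-5 checklist ratings,
    mapped so that all-1 ratings give 0% and all-5 ratings give 100%."""
    return 100 * (sum(scores) - len(scores)) / (4 * len(scores))

# Per-category and global compliance (illustrative data):
categories = {
    "Navigation": [5, 4, 4, 3],
    "Predictability": [4, 4, 5, 5],
}
per_category = {name: compliance_pct(s) for name, s in categories.items()}
overall = compliance_pct([x for s in categories.values() for x in s])

# Dispersion of one problem's severity ratings across evaluators:
severities = [3, 4, 2, 3]
avg, sd = mean(severities), stdev(severities)
# A low sd suggests agreement; a high sd flags the problem for detailed review.
```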

5.3.3. S3.3 Qualitative Analysis

The qualitative information can come from different sources: the perception questionnaires obtained in the executed methods, the task lists, the observers’ logs, and audiovisual and written records. To perform the qualitative analysis, consider for each result previously obtained:
  • Task list: For evaluation methods where the system, product or service is examined following a set of tasks, we recommend documenting the comments and the correct and incorrect actions carried out by the participants and/or evaluators.
  • Audiovisual and written records: Organize and complement the written records obtained by the observers through the audiovisual records. These records can be based on the comments of the evaluators, as well as other aspects found when reviewing the captured audiovisual record.
  • Observer Logs: Organize the information documented by the observers, such as comments and/or correct and incorrect actions carried out by the participants throughout the execution process of the evaluation method.
  • List of Comments and Recommendations: Create a consolidated list that includes all the comments and recommendations identified through the perception questionnaires in each of the evaluation methods carried out, as well as those consolidated in the list of tasks, audiovisual records, and logs mentioned in the previous points. For this, it is recommended to group the comments and/or recommendations of all the outputs into common and easy-to-understand categories, such as the proposed UX factors for people with ASD [10]. Repeated comments must be merged into a single new comment. Organizing and consolidating these comments and recommendations will make it possible to find common patterns, positive and negative aspects, as well as identify general and specific problems that have not been formally found through the methods.
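The grouping and merging of comments can be sketched as follows. Category assignment is done manually by the researchers (e.g., using the UX factors [10]); the sketch only merges verbatim repeats, while near-duplicate comments remain a manual judgment:

```python
def group_comments(comments):
    """Group (category, text) comment pairs under their categories,
    merging verbatim repeated comments into a single entry."""
    grouped = {}
    for category, text in comments:
        bucket = grouped.setdefault(category, [])
        if text not in bucket:
            bucket.append(text)
    return grouped

raw = [
    ("Predictability", "Unexpected pop-ups confused the participant"),
    ("Engagement", "Participant enjoyed the visual reward screen"),
    ("Predictability", "Unexpected pop-ups confused the participant"),  # repeat
]
grouped = group_comments(raw)
# grouped["Predictability"] now holds a single merged comment.
```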

5.3.4. S3.4 UX Report

After carrying out the quantitative and qualitative analyses detailed in the previous substages, the final substage of the methodology integrates and interprets the results, which are used to generate a detailed report on the UX of the evaluated system, product, or service, highlighting the potential problems found and providing recommendations to improve the UX.
Considering the previous analyses, a single consolidated report on the UX of the system, product or service must be generated, which is considered the final output of the methodology to evaluate the UX in systems, products or services for people with ASD. To prepare this report, consider:
  • We recommend that the UX evaluation report be organized first according to the evaluation methods executed, and then have a section for general results.
  • Results of methods: Provide a consolidated analysis and interpretation of the information obtained in each of the experiments carried out with the selected methods, including potential problems found, conclusions and recommendations. Include interpretations of each of the graphs created with the information from the evaluations carried out.
  • Quantitative Analysis: Include a section of general quantitative results where the potential problems found between the different evaluation methods are related, including, for example, most common potential problems, ranking of problems according to their general criticality, observations found when comparing the results of the methods and any other information that is relevant to improve the UX of the system, product, or service. The quantitative information can be classified and organized based on the established UX factors [10], or other criteria that the researchers deem convenient.
  • Qualitative Analysis: Include a comments and qualitative analysis section, which presents an overview of the evaluation and includes the qualitative results analyzed in substage S3.3. For this analysis, it is important to highlight common patterns found in the comments of all the experiments, positive aspects, negative aspects, and any other information that is considered relevant to improve the UX of the system, product, or service. This analysis can be supported by the quantitative results of the report.
  • Recommendations and Proposed Solutions: Include a section in the report where researchers present recommendations to solve the problems previously described in the report with a UX perspective, as well as recommendations that are considered relevant for future evaluations.
Once the UX report is completed, it can be used by the developers and/or stakeholders of the system, product, or service, to improve the UX of people with ASD by fixing the problems found and applying the recommendations provided.

6. Validation and Discussion

Starting from our preliminary proposal of the methodology [11], we have improved it based on the opinions of three UX expert researchers. In this revision, each stage and substage has been detailed and restructured, and we included a new substage: S1.2 (Experiment Design). After consolidating these changes into a new version of the methodology, an expert judgment validation has been carried out with 22 experts who have knowledge about UX/Usability, ASD, or both.
The experts’ profiles include academic researchers, PhD students, computer scientists with UX/Usability expertise, and domain-specific experts, such as speech therapists, psychologists, counselors, and educators with hands-on experience working with people with ASD. Some of these experts have experience in both UX/Usability and ASD, and some have ASD themselves.
In the expert judgment validation, each participant has been given a specification document of the methodology, which includes a summarized version and a detailed version, and a survey which was created based on the proposal from Quiñones et al. [44]. The validation carried out is aimed at obtaining feedback from experts.
The survey has been divided into three sections.
(1) First section: Learn about the background of the participating experts.
(2) Second section: Evaluate the stages and substages of the methodology, using a five-level Likert scale (1—worst to 5—best) in four factors (F1, F2, F3 and F4):
  • (F1) Usefulness: How useful do you consider each stage and substage of the methodology?
  • (F2) Clarity: How do you rate the clarity of each stage and substage of the methodology?
  • (F3) Ease of use: How easy would it be to implement each stage and substage of the methodology?
  • (F4) Lack of Detail: Do you think that the stages and/or substages of the methodology need more detail or additional elements?
(3) Third section: Learn their opinions about the methodology and its stages and substages, which includes:
  • Two questions (Q1 and Q2) focused on finding out their general opinion about the methodology, through a five-level Likert scale (1—worst to 5—best).
    (Q1) Use in future evaluations: If you had to evaluate the user experience in systems, products or services used by people with ASD, would you use our proposed methodology?
    (Q2) Completeness: Do you think that the methodology covers all the aspects to be evaluated in systems, products or services used by people with ASD?
  • Five open questions focused on knowing their opinions and comments on the methodology, stages and substages.
    (O1): Would you remove or add any evaluation method proposed by the methodology? Which one(s) and why?
    (O2): Would you change, add, or eliminate any aspect of a stage or substage of the methodology? Which one(s) and why?
    (O3): Would you change, add, or eliminate any aspect of the evaluation methods considered in the proposed methodology? Which one(s) and why?
    (O4): What aspects do you consider were not covered by the proposed methodology and should be included in the methodology to evaluate systems used by people with ASD?
    (O5): Do you have any additional comments and/or suggestions for the authors?
The following results were obtained from this survey.

6.1. Experts Background

To know about the backgrounds of the 22 experts, they have been asked about their previous knowledge about UX/Usability and ASD. As a result, we have obtained the following information:
  • A total of 20 experts (90.90%) previously knew the concepts of UX/Usability.
  • A total of 21 experts (95.45%) previously knew the ASD concept. Of these 21 experts:
    A total of eight experts (38.09%) mentioned that they have interacted with people with ASD, because they have relatives with ASD and/or have themselves been diagnosed with ASD.
    A total of 13 experts (61.90%) mentioned that they have taught, researched, or carried out experiments with people with ASD.

6.2. Quantitative Results

Table 1 shows the results obtained by each of the factors (F1–F4). The information obtained is analyzed below.
It is important to mention that the methodology specification delivered to the experts only had three substages in the Results Analysis Stage. Considering the feedback from the experts, a new substage called “Grouping of potential problems” has been added, and substage S3.3 was renamed.
  • (F1) Usefulness: The average usefulness of the methodology specification is high (4.77). Stage S3 (Results Analysis) is considered the most useful (4.91). Substage S2.2 (Inspections) is considered the least useful; however, its average is still high (4.59). The standard deviation is relatively low, ranging from 0.29 (stage S3) to 0.67 (substage S2.2); the standard deviation of stage S3 (Results Analysis) is the lowest across the four factors. Overall, the perceived usefulness of the methodology is high.
  • (F2) Clarity: The average clarity of the methodology specification is high (4.39). Substage S2.2 (Inspections) is considered the clearest (4.59). Substages S1.1 (Method Execution Planning) and S3.3 (Integration of Results) are considered the least clear (4.23). The standard deviation varies between 0.59 (substage S2.2) and 0.97 (substage S3.3). Overall, the perceived clarity of the methodology is high. Considering the results obtained, the specifications of the substages perceived as least clear (S1.1 and S3.3) have been improved.
  • (F3) Ease of use: The average ease of use of the methodology specification is moderate (3.69). Stage S3 (Results Analysis) and substage S3.2 (Qualitative Analysis) are considered the easiest to use (4.00). Substage S2.3 (User Tests) is considered the most difficult to perform (3.18) and has the highest standard deviation (1.14); experts commented that this substage is the most difficult to carry out due to the unforeseen events that may arise and the varied profiles that people with ASD may have, and not necessarily due to the complexity of the substage specification. Standard deviations are relatively high, ranging from 0.77 (stage S1) to 1.14 (substage S2.3).
  • (F4) Lack of Detail: The average lack of detail in the specification of the methodology is low (2.43). Due to the nature of the question, a low average does not imply negative results; a high average would indicate that the methodology lacks detail. Substage S1.4 (Selection of Participants) has the highest average (2.73), while stage S3 (Results Analysis) and substage S3.2 (Qualitative Analysis) have the lowest (2.18). Standard deviations are high, ranging from 1.26 (stage S1—Planning) to 1.47 (substage S2.2—Inspections), so expert opinions on factor F4 are mixed.
The experts’ perceptions of the factors are homogeneous, except for factor F4. Because substage S2.3 (User Tests) is perceived as highly useful (4.86) but with low ease of use (3.18) and a comparatively high need for more detail (2.55), its specification has been improved to facilitate its application. Additionally, greater detail has been provided in the specification of substages S1.1 (Method Execution Planning) and S3.3 (Integration of Results), since they were considered the least clear (4.23).
The results obtained in the two general questions (Q1 and Q2) on the methodology can be seen in Table 2.
These results show that:
  • The perception of the experts regarding the use of the methodology in future evaluations (Q1) is high (4.32). A total of 86% of the evaluators report that they would probably or definitely use the proposed methodology to evaluate systems, products or services used by people with ASD.
  • The experts’ perception regarding the completeness of the methodology (Q2) is relatively high (3.77). Experts emphasize that working with people with ASD is not easy. A total of 77% of the evaluators declare that the methodology probably or definitely covers all the necessary aspects to evaluate systems, products or services used by people with ASD.
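The percentages reported for Q1 and Q2 correspond to the share of top-two-box answers (4 or 5) on the five-level scale. A minimal sketch with illustrative response data (not the actual survey responses):

```python
def top2box(responses):
    """Share (%) of 4-5 ('probably'/'definitely') answers
    on a five-level Likert item."""
    return 100 * sum(1 for r in responses if r >= 4) / len(responses)

# Illustrative: 19 of 22 respondents answer 4 or 5.
q1 = [5] * 10 + [4] * 9 + [3] * 3
print(round(top2box(q1)))  # -> 86
```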

6.3. Qualitative Results

When experts have been asked if they would remove or add any evaluation methods to the proposed methodology (O1), most have mentioned that they would not, noting that the chosen methods are relevant to the context in which they will be applied.
When experts have been asked if they would change, add, or remove any aspect of a stage or substage (O2), experts have provided various comments. Table 3 shows the comments of the experts, if the suggestion has been considered, and the justification or action performed.
When the experts have been asked if they would change, add, or delete any aspect of the proposed evaluation methods (O3), the experts have mentioned various comments. Table 4 shows the comments of the experts, if the suggestion has been considered, and the justification or action performed.
When the experts have been asked about what aspects they consider were not covered by the methodology and should be included (O4), the experts have mentioned various comments. Table 5 shows the comments of the experts, if the suggestion has been considered, and the justification or action performed.
When the experts have been asked if they have any additional comments and/or suggestions (O5), the experts have mentioned various comments. Table 6 shows the comments of the experts, if the suggestion has been considered, and the justification or action performed.
Expert feedback is positive. The experts point out that the methodology is complete, replicable, and modern, and highlight that it can contribute to a better inclusion of people with ASD, while noting that carrying out user tests with people with ASD can be difficult. Based on this feedback, we recognized the need to modify the structure of stage S3 (Results Analysis Stage) and created substages S3.1 (Grouping of Potential Problems) and S3.4 (UX Report) to provide better clarity for this stage. It should be noted that stage S3 (Results Analysis Stage) has only changed in structure, not in content.
Considering the comments and suggestions provided by the evaluators, it has been possible to refine the proposed methodology, as presented in this document.

7. Conclusions

Studies have evaluated the UX and/or usability of their technological proposals through various evaluation methods and instruments, with the support of experts in ASD, people with ASD, and their parents and/or tutors. However, many of the evaluation methods and/or instruments used are not particularized for, or do not consider, the characteristics of people with ASD, and the investigations do not provide enough detail about the evaluations carried out [6].
Evaluation methods and instruments that consider the characteristics of people with ASD are therefore necessary, with special emphasis when these methods and instruments are executed with people with ASD themselves; hence the need for a formal process to evaluate the UX of systems, products or services used by people with ASD.
We have followed a seven-stage process to create the methodology for evaluating the UX of systems, products or services used by people with ASD. This process is backed up by (1) the publication of a systematic literature review [6], which justifies the need for a formal evaluation process; (2) the creation of nine UX factors for people with ASD [10], which supported the characterization of the users and the particularization of evaluation methods, instruments and processes aimed at evaluating UX in systems, products or services used by people with ASD; and (3) the publication of a first iteration of the creation process, which resulted in a preliminary proposal of the methodology [11].
Two validations of the preliminary proposal of the methodology [11] were carried out. Experts with knowledge in UX/Usability, in the ASD condition, or in both provided us with comments and suggestions, many of which were adopted. In the first validation, the methodology was refined and specified in greater detail, and substage S1.2 (Experiment Design) was created. In the second validation, modifications were made mainly to substage S2.3 (User Tests), and stage S3 (Results Analysis Stage) was restructured.
Considering the results of the two validations carried out (two iterations of the creation process), we can highlight that the experts perceive the proposed methodology as useful and as a contribution to the inclusion of people with ASD; furthermore, they expressed their intention to use it in the future.
The proposed methodology establishes a formal process to evaluate the user experience in systems, products and services used by adults with ASD, which includes evaluation methods, instruments and processes that were selected and adapted according to the specific characteristics of the users. Using the proposed methodology with an adequate selection or adaptation of instruments, such as those recommended in this paper, can help to improve the satisfaction and perception of people with ASD about the system, product or service evaluated.
We believe that this methodology proposal contributes to solving the need, identified in the early stages of this investigation, for a formal process to evaluate the UX of systems, products or services used by people with ASD. By using this methodology, investigators will be able to follow a validated process that uses specific methods and instruments selected and adapted according to the needs of people with ASD; by identifying and addressing the potential UX problems found, the UX of the system, product or service can be improved, thus helping to provide a positive and rewarding experience for users with ASD.
In future work, we intend to apply the proposed methodology to evaluate the user experience in a system designed for people with ASD and a website not designed for people with ASD. We aspire to apply the methodology with the support of experts with knowledge in UX/Usability, ASD, and both, as well as to have the support of young adults diagnosed with ASD.

Author Contributions

Conceptualization, K.V., C.R. and F.B.; methodology, K.V. and C.R.; validation, C.R., F.B. and E.J.; formal analysis, K.V.; investigation, K.V.; data curation, K.V.; writing—original draft preparation, K.V.; writing—review and editing, K.V., C.R., F.B. and E.J.; visualization, K.V.; supervision, C.R. and F.B.; project administration, K.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We want to thank all evaluators who reviewed the methodology proposal and provided us with valuable feedback. Katherine Valencia is a beneficiary of ANID-PFCHA/Doctorado Nacional/2019-21191170.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Planning Stage Specification.
S1: Planning Stage
Plan UX evaluations to be performed. Find experts, participants, and tutors.
What do I need to get started?
  • Select a specific system to evaluate UX
What to do?
  • Collect information about the system.
  • Selection of evaluation methods to be performed.
  • Identify and describe goals, protocols, scenarios, tasks and expected results of the evaluation.
  • Selection of participants with ASD *, tutors **, UX/Usability experts and ASD domain experts.
* Participants must be within the objective users of the system, product, or service to be evaluated.
** Considering the dependence and autonomy of the participants, we recommend having the support of tutors or evaluators who can help the participants with problems or questions.
What is obtained?
  • ① System, product, or service information.
  • ② List of methods to execute.
  • ③ Experiment Design Document.
  • ④ List of evaluators.
  • ⑤ List of participants.
Table A2. Method Execution Planning Specification.
S1.1 Method Execution Planning
Selection and planning of methods to be executed.
What do I need to get started?
  • ① System, product, or service information.
What to do?
  • Collect system, product, or service information.
  • Define objective and scope of the UX evaluation.
  • Select the evaluation methods to be carried out based on the objective, scope, resources, and available time. It is recommended that:
    If there are no time and resource constraints, it is suggested to perform all the methods proposed in the methodology (Figure 1).
    If there are time and resource constraints, it is recommended to follow the simplified sequence: property checklist, heuristic evaluation and field observation.
    Otherwise, select the methods to be performed according to the time and resources available, considering the complexity of each method (see S1.1 Method Execution Planning).
What is obtained?
  • ② List of methods to execute.
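The selection guidance above can be expressed as a simple decision rule. The following Python sketch is only illustrative: the method names come from the methodology's evaluation methods, while the function name and boolean inputs are assumptions of this example.

```python
# Illustrative sketch of the method-selection guidance in S1.1 (Table A2).
# Method names follow the methodology; everything else is an assumption.

ALL_METHODS = [
    "property checklist",
    "group-based expert walkthrough",
    "perspective-based inspection",
    "heuristic evaluation",
    "field observation",
    "controlled observation",
]

# Simplified sequence recommended when time and resources are constrained.
SIMPLIFIED_SEQUENCE = ["property checklist", "heuristic evaluation", "field observation"]

def select_methods(time_constrained: bool, resource_constrained: bool,
                   custom_selection=None):
    """Return the list of evaluation methods to execute (S1.1 guidance)."""
    if not time_constrained and not resource_constrained:
        return ALL_METHODS            # no constraints: perform all proposed methods
    if custom_selection:              # otherwise: pick methods by available budget
        return custom_selection
    return SIMPLIFIED_SEQUENCE        # constrained: recommended minimal sequence

print(select_methods(time_constrained=True, resource_constrained=True))
```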
Table A3. Rating Scales of Problems Detected.
Rating Scale | Description | Range | Scale
Severity | Scale that evaluates how detrimental the potential problem is to the use of the system. | 0–4 | (4) Catastrophic problem; (3) major problem; (2) minor problem; (1) cosmetic problem; (0) it is not a problem
Frequency | Scale that evaluates the occurrence of the problem during use of the system. | 0–4 | (4) >90%; (3) 51–90%; (2) 11–50%; (1) 1–10%; (0) <1%
Criticality | Sum of the assigned severity and frequency, which represents the level of criticality of the problem. | 0–8 | —
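As an illustration of how the scales in Table A3 combine, the following Python sketch (with hypothetical problem descriptions and scores) computes the criticality of each potential problem and ranks the list so that the most critical problems can be addressed first:

```python
# Rank potential problems using the scales from Table A3:
# severity (0-4), frequency (0-4), criticality = severity + frequency (0-8).
# The problem descriptions and scores below are hypothetical examples.

problems = [
    {"problem": "Ambiguous icon labels", "severity": 3, "frequency": 2},
    {"problem": "Unexpected animation on load", "severity": 4, "frequency": 3},
    {"problem": "Low-contrast button text", "severity": 2, "frequency": 4},
]

for p in problems:
    assert 0 <= p["severity"] <= 4 and 0 <= p["frequency"] <= 4
    p["criticality"] = p["severity"] + p["frequency"]  # 0-8

# Most critical problems first, to prioritize fixes.
ranked = sorted(problems, key=lambda p: p["criticality"], reverse=True)
for p in ranked:
    print(f'{p["criticality"]}: {p["problem"]}')
```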
Table A4. Experiment Design Specification.
S1.2 Experiment Design
Design and specify the experiments to be performed.
What do I need to get started?
  • ① System, product, or service information.
  • ② List of methods to execute.
What to do?
  • Collect system, product, or service information.
  • Define the evaluation objective(s) for each method to be carried out.
  • Define expected results to be obtained in each evaluation.
  • Scenario creation *.
  • Task set creation *.
  • Protocol creation. The protocols contemplate a set of documents required for the execution of the evaluation methods.
  • Consolidate the information in the “Experiment Design document”.
* It is recommended, if possible, that aspects, such as scenarios and tasks, are universally defined to use in multiple evaluation methods.
What is obtained?
  • ③ Experiment Design Document.
Table A5. Evaluators Selection Specification.
S1.3 Evaluators Selection
Search and selection of evaluators.
What do I need to get started?
  • ① System, product, or service information.
What to do?
  • Analyze system, product, or service information.
  • Select experts in UX/Usability, experts in the ASD domain and/or experts with knowledge in both areas (UX/Usability and in the ASD domain).
  • Select a leader with knowledge in both areas (UX/Usability and in the ASD domain) or a UX/Usability expert.
  • Collect information from expert evaluators.
  • Consolidate the list of expert evaluators.
What is obtained?
  • ④ List of evaluators.
Table A6. Participants Selection Specification.
S1.4 Participants Selection
Search and selection of participants with ASD and their tutors.
What do I need to get started?
  • ① System, product, or service information.
What to do?
  • Define target users.
  • Search for participants with ASD.
    Have the permission of the guardians if necessary.
  • Search and list tutors, if needed.
    It is recommended that they are people close to the participants with ASD.
  • Collect the information obtained from the participants.
  • Consolidate the list of participants.
What is obtained?
  • ⑤ List of participants.
Table A7. Execution Stage Specification.
S2: Execution Stage
Execution of selected evaluation methods.
What do I need to get started?
  • ② List of methods to execute.
  • ③ Experiment Design Document.
  • ④ List of evaluators.
  • ⑤ List of participants.
What to do?
  • Collect information obtained in the planning stage.
  • Evaluators training.
  • Execute the preliminary evaluation.
  • Execute the inspection methods.
  • Execute user tests.
  • Document the results obtained in each of the evaluations.
What is obtained?
  • ⑥ Results of the execution of the preliminary evaluation.
  • ⑦ Results of the execution of the inspection method(s).
  • ⑧ Results of the execution of the user test(s).
Table A8. Property Checklist Specification.
Property Checklist
Input
  • ③ Experiment Design Document:
    Goals.
    Protocol.
    Scenarios (Optional).
    Tasks (Optional).
    Expected results.
  • Checklist tool/s to use.
  • ④ List of evaluators.
Execution Step
  • For further details and specification of the method refer to study [39].
  • We recommend that the evaluators must indicate compliance with each of the items on the checklist provided and may also add comments for each item or in general.
Output
  • Percentage of system, product, or service satisfaction for people with ASD *.
    Percentage of satisfaction per category ** = (average score per category/maximum score to be achieved) × 100.
    Total satisfaction percentage = average percentage of satisfaction of all categories.
  • System Perception Questionnaire.
    Comments and/or recommendations of the evaluators.
  • Preliminary questionnaire (Demographics).
* The percentage of satisfaction of the system considers that each item of the property checklist will be evaluated on a Likert scale, as proposed in our property checklist adaptation [15].
** It is proposed to categorize the items of the property checklist [15], to obtain an overview of the successes and failures in the design of the evaluated system, product or services.
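The satisfaction formulas above can be sketched in a few lines of Python. This example assumes a 1–5 Likert score per checklist item (as in the adapted property checklist [15]); the category names and scores are hypothetical:

```python
# Compute the satisfaction percentages defined for the property checklist:
#   per-category % = (average score per category / maximum score) * 100
#   total %        = average of all category percentages
# Category names and item scores are hypothetical examples.

MAX_SCORE = 5  # assumed maximum Likert score per checklist item

scores_by_category = {
    "Visual design": [4, 5, 3, 4],
    "Navigation": [3, 3, 4],
    "Content": [5, 4, 4, 5],
}

def category_satisfaction(scores, max_score=MAX_SCORE):
    """Percentage of satisfaction for one checklist category."""
    return (sum(scores) / len(scores)) / max_score * 100

percentages = {cat: category_satisfaction(s) for cat, s in scores_by_category.items()}

# Total satisfaction percentage = average of the category percentages.
total = sum(percentages.values()) / len(percentages)

for cat, pct in percentages.items():
    print(f"{cat}: {pct:.1f}%")
print(f"Total satisfaction: {total:.1f}%")
```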
Table A9. Group-Based Expert Walkthrough Specification.
Group-Based Expert Walkthrough
Input
  • ③ Experiment Design Document:
    Goals.
    Protocol.
    Scenarios.
    Tasks.
    Expected results.
  • ④ List of evaluators.
Execution Step
  • For further details and specification of the method refer to the study [40].
  • The author of the group-based expert walkthrough method only proposes the assignment of the severity scale [40]. For the purposes of the methodology, we recommend using the frequency and criticality scales.
Output
  • List of potential problems.
    Details about severity, frequency and criticality can be found in Table A3.
  • List of tasks performed.
    Document completed by the evaluators.
  • System Perception Questionnaire.
    Comments and/or recommendations of the evaluators.
  • Preliminary questionnaire (Demographics).
Table A10. Perspective-Based Inspection Specification.
Perspective-Based Inspection
Input
  • ③ Experiment Design Document:
    Goals.
    Protocol.
    Scenarios.
    Tasks.
    Expected results.
  • ④ List of evaluators.
Execution Step
  • For further details and specification of the method refer to the study [41].
  • The authors of the perspective-based inspection method only propose the assignment of the severity scale. For the purposes of the methodology, we recommend using the frequency and criticality scales.
Output
  • List of potential problems.
    There will be a list of problems for each perspective.
    Details about severity, frequency and criticality can be found in Table A3.
    The values of severity, frequency and criticality are agreed upon by all the evaluators.
  • List of tasks performed.
    Document completed by each evaluator.
  • System Perception Questionnaire.
    Comments and/or recommendations of the evaluators.
  • Preliminary questionnaire (Demographics).
Table A11. Heuristic Evaluation Specification.
Heuristic Evaluation
Input
  • ③ Experiment Design Document:
    Goals.
    Protocol.
    Scenarios (Optional).
    Tasks (Optional).
    Expected results.
  • ④ List of evaluators:
  • Set of selected heuristics.
Execution Step
  • For further details and specification of the method, refer to study [9]. For the purposes of the methodology, we recommend using the frequency and criticality scales.
  • This stage can be a free exploration of the system, product or service or it can be guided by a set of tasks, depending on what is indicated in the execution plan.
Output
  • List of potential problems.
    Details about severity, frequency and criticality can be found in Table A3.
  • List of tasks performed *.
    Document completed by each evaluator.
  • System Perception Questionnaire.
    Comments and/or recommendations of the evaluators.
  • Preliminary questionnaire (Demographics).
* Depending on the execution plan, it may or may not be necessary.
Table A12. Field Observations Specification.
Field Observations
Input
  • ③ Experiment Design Document:
    Goals.
    Protocol.
    Expected results.
  • ⑤ List of participants.
Execution Step
  • For further details and specification of the method refer to the study [39].
  • Consider the general and specific recommendations of the evaluation method presented in Section 5.2.3 (S2.3 User Tests).
Output
  • System Perception Questionnaire.
    Comments and/or recommendations from the participants and/or tutors. Tutors are only part of the experiment if needed.
  • Audiovisual and written records.
  • List of potential problems.
    Potential problems found by the evaluation observers.
  • Observers log.
    List of observations about the behavior and involvement of users during interaction with the system.
  • Confidentiality agreement.
Table A13. Controlled Observations Specification.
Controlled Observations
Input
  • ③ Experiment Design Document:
    Goals.
    Protocol.
    Scenarios.
    Tasks.
    Expected results.
  • ⑤ List of participants.
Execution Step
  • For further details and specification of the method refer to the study [39].
  • Consider the general and specific recommendations of the evaluation method presented in Section 5.2.3 (S2.3 User Tests).
Output
  • System Perception Questionnaire.
    Comments and/or recommendations from the participants and/or tutors. Tutors are only part of the experiment if needed.
  • Audiovisual and written records.
  • List of potential problems.
    Potential problems found by the evaluation observers.
  • Task list.
    Completed by the participants.
  • Observers log.
    List of observations about the behavior and involvement of users during interaction with the system.
    Performance measures of the tasks performed (time and number of tasks performed).
  • Confidentiality agreement.
  • Preliminary questionnaire (Demographics).
Table A14. Results Analysis Stage Specification.
S3: Results Analysis Stage
Analysis and report of the results found.
What do I need to get started?
  • ⑥ Results of the execution of the preliminary evaluation.
  • ⑦ Results of the execution of the inspection method(s).
  • ⑧ Results of the execution of the user test(s).
What to do?
  • Group the results obtained from the evaluations carried out.
  • Quantitative analysis.
  • Qualitative analysis.
  • Create the UX evaluation report.
What is obtained?
  • ⑨ Results of quantitative analysis.
  • ⑩ Results of qualitative analysis.
  • ⑪ UX evaluation report.
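As one possible reading of the "group the results" step above, the following Python sketch (with hypothetical method names, problems, and criticality scores) merges the problem lists produced by different evaluation methods, recording every method that reported each problem and keeping the highest criticality observed:

```python
# Illustrative grouping of potential problems across evaluation methods (S3).
# Problems with the same description are merged; each kept entry records the
# reporting methods and the maximum criticality. All data here is hypothetical.
from collections import defaultdict

results = {
    "Heuristic evaluation": [("Ambiguous icon labels", 6), ("No exit from tutorial", 4)],
    "Field observation": [("Ambiguous icon labels", 5), ("Startling sound effects", 7)],
}

grouped = defaultdict(lambda: {"methods": [], "criticality": 0})
for method, problems in results.items():
    for description, criticality in problems:
        entry = grouped[description]
        entry["methods"].append(method)
        entry["criticality"] = max(entry["criticality"], criticality)

# Report, most critical problems first.
for description, entry in sorted(grouped.items(), key=lambda kv: -kv[1]["criticality"]):
    print(f'{entry["criticality"]} | {description} | found by: {", ".join(entry["methods"])}')
```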

References

  1. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; American Psychiatric Publishing: Washington, DC, USA, 2013; pp. 50–59. [Google Scholar]
  2. Schopler, E.; Mesibov, G. Insider’s point of view. In High-Functioning Individuals with Autism; Springer: New York, NY, USA, 1992. [Google Scholar]
  3. Jasmin, E.; Couture, M.; Mckinley, P.; Reid, G.S.; Fombonne, E.; Gisel, E. Sensori-Motor and Daily Living Skills of Preschool Children with Autism Spectrum Disorders. Autism Dev. Disord. 2009, 39, 231–241. [Google Scholar] [CrossRef] [PubMed]
  4. Neely, L.; Ganz, J.; Davis, J.; Boles, M.; Hong, E.; Ninci, J.; Gilliland, W. Generalization and Maintenance of Functional Living Skills for Individuals with Autism Spectrum Disorder: A Review and Meta-Analysis. Autism Dev. Disord. 2016, 3, 37–47. [Google Scholar] [CrossRef]
  5. Leyfer, T.; Folstein, S.; Bacalman, S.; Davis, N.; Dinh, E.; Morgan, J.; Tager-Flusberg, H.; Lainhart, J. Comorbid Psychiatric Disorders in Children with Autism: Interview Development and Rates of Disorders. Autism Dev. Disord. 2006, 36, 849–861. [Google Scholar] [CrossRef] [PubMed]
  6. Valencia, K.; Rusu, C.; Quiñones, D.; Jamet, E. The Impact of Technology on People with Autism Spectrum Disorder: A Systematic Literature Review. Sensors 2019, 19, 4485. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Fernandez, A.; Insfran, E.; Abrahão, S. Usability evaluation methods for the web: A systematic mapping study. Inf. Softw. Technol. 2011, 53, 789–817. [Google Scholar] [CrossRef]
  8. Brooke, J. SUS: A “Quick and Dirty” Usability Scale. In Usability Evaluation in Industry; Taylor and Francis: London, UK, 1996; pp. 189–194. [Google Scholar]
  9. Nielsen, J.; Molich, R. Heuristic Evaluation of User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing System, Seattle, WA, USA, 1–5 April 1990; pp. 249–256. [Google Scholar]
  10. Valencia, K.; Rusu, C.; Botella, F. User Experience Factors for People with Autism Spectrum Disorder. Appl. Sci. 2021, 11, 10469. [Google Scholar] [CrossRef]
  11. Valencia, K.; Rusu, C.; Botella, F. Preliminary Methodology to Evaluate the User Experience for People with Autism Spectrum Disorder. In Proceedings of the International Conference on Human-Computer Interaction, Online, 24–29 July 2021. [Google Scholar]
  12. International Organization for Standardization. Ergonomics of Human System Interaction—Part 210: Human-Centered Design for Interactive Systems; International Organization for Standardization: Geneva, Switzerland, 2019. [Google Scholar]
  13. International Organization for Standardization. Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts; International Organization for Standardization: Geneva, Switzerland, 2018. [Google Scholar]
  14. Vermeeren, A.; Lai-Chong, E.; Roto, V.; Obrist, M.; Hoonhout, J.; Väänänen-Vainio-Mattila, K. User experience evaluation methods: Current state and development needs. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, New York, NY, USA, 16–20 October 2010; pp. 521–530. [Google Scholar]
  15. Valencia, K.; Botella, F.; Rusu, C. A Property Checklist to Evaluate the User Experience for People with Autism Spectrum Disorder. In Proceedings of the International Conference on Human-Computer Interaction, Online, 26 June–1 July 2022. [Google Scholar]
  16. Schmidt, M.; Schmidt, C.; Glaser, N.; Beck, D.; Lim, M.; Palmer, H. Evaluation of a spherical video-based virtual reality intervention designed to teach adaptive skills for adults with autism: A preliminary report. Interact. Learn. Environ. 2021, 29, 345–364. [Google Scholar] [CrossRef]
  17. Gentile, V.; Adjorlu, A.; Serafin, S.; Rocchesso, D.; Sorce, S. Touch or touchless? evaluating usability of interactive displays for persons with autistic spectrum disorders. In Proceedings of the 8th ACM International Symposium on Pervasive Displays (PerDis’ 19), New York, NY, USA, 12–14 June 2019. [Google Scholar]
  18. Francese, R.; Guercio, A.; Rossano, V.; Bhati, D. A Multimodal Conversational Interface to Support the creation of customized Social Stories for People with ASD. In Proceedings of the 2022 International Conference on Advanced Visual Interfaces (AVI 2022), Rome, Italy, 6–10 June 2022. [Google Scholar]
  19. Vallefuoco, E.; Purpura, G.; Gison, G.; Bonifacio, A.; Tagliabue, L.; Broggi, F.; Scuccimarra, G.; Pepino, A.; Nacinovich, R. A Multidisciplinary Telerehabilitation Approach for Supporting Social Interaction in Autism Spectrum Disorder Families: An Italian Digital Platform in Response to COVID-19. Brain Sci. 2021, 11, 1404. [Google Scholar] [CrossRef]
  20. Nuske, H.J.; Buck, J.E.; Ramesh, B.; Becker-Haimes, E.M.; Zentgraf, K.; Mandell, D.S. Making Progress Monitoring Easier and More Motivating: Developing a Client Data Collection App Incorporating User-Centered Design and Behavioral Economics Insights. Soc. Sci. 2022, 11, 106. [Google Scholar] [CrossRef]
  21. Hernández, P.; Molina, A.I.; Lacave, C.; Rusu, C.; Toledano-González, A. PlanTEA: Supporting Planning and Anticipation for Children with ASD Attending Medical Appointments. Appl. Sci. 2022, 12, 5237. [Google Scholar] [CrossRef]
  22. Ramos-Aguiar, L.R.; Álvarez-Rodríguez, F.J. Teaching Emotions in Children With Autism Spectrum Disorder Through a Computer Program With Tangible Interfaces. Rev. Iberoam. De Tecnol. Aprendiz. 2021, 16, 365–371. [Google Scholar] [CrossRef]
  23. Nielsen, J. NN/g Nielsen Norman Group. Available online: https://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/ (accessed on 22 August 2022).
  24. Camargo, M.C.; Carvalho, T.C.P.; Barros, R.M.; Barros, V.T.O.; Santana, M. Improving Usability of a Mobile Application for Children with Autism Spectrum Disorder Using Heuristic Evaluation. In Proceedings of the International Conference on Human-Computer Interaction, Orlando, FL, USA, 26–31 June 2019. [Google Scholar]
  25. Islam, M.N.; Bouwman, H. Towards user-intuitive web interface sign design and evaluation: A semiotic framework. Int. J. Hum. Comput. Stud. 2016, 86, 121–137. [Google Scholar] [CrossRef]
  26. Susanti, F.; Junaedi, D.; Effendy, V. Communication Learning User Interface Model for Children with Autism with the Goal-Directed Design Method. In Proceedings of the 7th International Conference on Information and Communication Technology (ICoICT), Kuala Lumpur, Malaysia, 24–26 July 2019. [Google Scholar]
  27. Khan, S.; Tahir, M.N.; Raza, A. Usability issues for smartphone users with special needs—Autism. In Proceedings of the International Conference on Open Source Systems and Technologies, Lahore, Pakistan, 16–18 December 2013. [Google Scholar]
  28. Ghabban, F.M.; Hajjar, M.; Alharbi, S. Usability Evaluation and User Acceptance of Mobile Applications for Saudi Autistic Children. Int. J. Interact. Mob. Technol. 2021, 15, 30–46. [Google Scholar] [CrossRef]
  29. Harrison, R.; Flood, D.; Duce, D. Usability of mobile applications: Literature review and rationale for a new usability model. J. Interact. Sci. 2013, 1, 1. [Google Scholar] [CrossRef] [Green Version]
  30. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef] [Green Version]
  31. Ahmed, S.; Deneke, W.; Mai, V.; Veneruso, A.; Stepita, M.; Dawson, A.; Hoefel, B.; Claeys, G.; Lam, N.; Sharmin, M. InterViewR: A Mixed-Reality Based Interview Training Simulation Platform for Individuals with Autism. In Proceedings of the IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 22 September 2020. [Google Scholar]
  32. Adiani, D.; Schmidt, M.; Wade, J.; Swanson, A.R.; Weitlauf, A.; Warren, Z.; Sarkar, N. Usability Enhancement and Functional Extension of a Digital Tool for Rapid Assessment of Risk for Autism Spectrum Disorders in Toddlers Based on Pilot Test and Interview Data. In Proceedings of the International Conference on Human-Computer Interaction, Orlando, FL, USA, 26–31 June 2019. [Google Scholar]
  33. Sarkar, A.; Wade, J.; Swanson, A.; Weitlauf, A.; Warren, Z.; Sarkar, N. A Data-Driven Mobile Application for Efficient, Engaging, and Accurate Screening of ASD in Toddlers. In Proceedings of the 12th International Conference on Universal Access in Human-Computer Interaction, Las Vegas, NV, USA, 15–20 July 2018. [Google Scholar]
  34. Kim, B.; Lee, D.; Min, A.; Paik, S.; Frey, G.; Bellini, S.; Han, K.; Shih, P.C. PuzzleWalk: A theory-driven iterative design inquiry of a mobile game for promoting physical activity in adults with autism spectrum disorder. PLoS ONE 2020, 15, e0237966. [Google Scholar] [CrossRef]
  35. Fonteyn, M.E.; Kuipers, B.; Grobe, S.J. A Description of Think Aloud Method and Protocol Analysis. Qual. Health Res. 1993, 3, 430–441. [Google Scholar] [CrossRef]
  36. Polson, P.G.; Lewis, C.; Rieman, J.; Wharton, C. Cognitive walkthroughs: A method for theory-based evaluation of user interfaces. Int. J. Man-Mach. Stud. 1992, 36, 741–773. [Google Scholar] [CrossRef]
  37. All About UX: Information for User Experience Professionals. Available online: http://www.allaboutux.org/all-methods (accessed on 23 August 2022).
  38. Nielsen, J.; Landauer, T.K. A mathematical model of the finding of usability problems. In Proceedings of the INTERACT’ 93 and CHI’ 93 Conference on Human Factors in Computing Systems (CHI’ 93), Amsterdam, The Netherlands, 24–29 April 1993. [Google Scholar]
  39. Jordan, P.W. Designing Pleasurable Products. In An Introduction to the New Human Factors; Taylor & Francis: London, UK, 2000. [Google Scholar]
  40. Følstad, A. Group-based Expert Walkthrough. In Proceedings of the 3rd COST294-MAUSE International Workshop, Athens, Greece, 7 March 2007. [Google Scholar]
  41. Zhang, Z.; Basili, V.; Shneiderman, B. Perspective-based Usability Inspection: An Empirical Validation of Efficacy. Empir. Softw. Eng. 1999, 4, 43–69. [Google Scholar] [CrossRef]
  42. Khowaja, K.; Salim, S. Heuristics to evaluate interactive systems for children with Autism Spectrum Disorder (ASD). PLoS ONE 2015, 10, e0132187. [Google Scholar]
  43. Nielsen, J. NN/g Nielsen Norman Group. Available online: https://www.nngroup.com/articles/ten-usability-heuristics/ (accessed on 22 August 2022).
  44. Quiñones, D.; Rusu, C.; Rusu, V. A Methodology to Develop Usability/User eXperience Heuristics. Comput. Stand. Interfaces 2018, 59, 109–129. [Google Scholar] [CrossRef]
  45. Park, G.; Nanda, U.; Adams, L.; Essary, J.; Hoelting, M. Creating and Testing a Sensory Well-Being Hub for Adolescents with Developmental Disabilities. J. Inter. Des. 2020, 45, 13–32. [Google Scholar] [CrossRef]
  46. Thoren, A.; Quennerstedt, M.; Maivorsdotter, N. What Physical Education Becomes when Pupils with Neurodevelopmental Disorders are Integrated: A Transactional Understanding. Phys. Educ. Sport Pedagog. 2020, 26, 578–592. [Google Scholar] [CrossRef]
  47. TIBCO. TIBCO—What Is a Radar Chart? Available online: https://www.tibco.com/reference-center/what-is-a-radar-chart (accessed on 3 March 2022).
Figure 1. Overview of the process to create the methodology.
Figure 2. Stages of the UX evaluation methodology for people with ASD.
Figure 3. General description of the methodology.
Table 1. Results for factors F1, F2, F3 and F4.
Stage | F1—Utility AVG (SD) | F2—Clarity AVG (SD) | F3—Ease of Use AVG (SD) | F4—Lack of Detail AVG (SD)
S1: Planning Stage | 4.82 (0.50) | 4.36 (0.73) | 3.73 (0.77) | 2.59 (1.26)
S1.1: Method Execution Planning | 4.77 (0.53) | 4.23 (0.81) | 3.73 (0.83) | 2.64 (1.43)
S1.2: Experiment Design | 4.82 (0.50) | 4.41 (0.73) | 3.55 (0.96) | 2.32 (1.36)
S1.3: Evaluators Selection | 4.73 (0.63) | 4.45 (0.67) | 3.59 (0.91) | 2.45 (1.30)
S1.4: Participants Selection | 4.77 (0.53) | 4.27 (0.70) | 3.36 (1.09) | 2.73 (1.39)
S2: Execution Stage | 4.82 (0.50) | 4.41 (0.85) | 3.68 (0.99) | 2.50 (1.41)
S2.1: Preliminary Evaluation | 4.64 (0.58) | 4.45 (0.80) | 3.91 (0.92) | 2.41 (1.37)
S2.2: Inspections | 4.59 (0.67) | 4.59 (0.59) | 3.64 (0.95) | 2.50 (1.47)
S2.3: User Tests | 4.86 (0.47) | 4.50 (0.67) | 3.18 (1.14) | 2.55 (1.37)
S3: Results Analysis Stage | 4.91 (0.29) | 4.41 (0.85) | 4.00 (0.93) | 2.18 (1.33)
S3.1: Quantitative Analysis | 4.68 (0.57) | 4.36 (0.90) | 3.91 (1.02) | 2.23 (1.38)
S3.2: Qualitative Analysis | 4.77 (0.43) | 4.36 (0.85) | 4.00 (0.93) | 2.18 (1.30)
S3.3: Integration of Results | 4.82 (0.50) | 4.23 (0.97) | 3.68 (0.89) | 2.32 (1.43)
Overall average | 4.77 | 4.39 | 3.69 | 2.43
Table 2. Results of questions Q1 and Q2.
 | Q1—Intention of Use in Future Evaluation | Q2—Completeness
Average | 4.32 | 3.77
Standard Deviation | 0.72 | 0.75
Table 3. Results of questions O1 and O2.
Comment | Has Been Considered? | Justification/Action
Previously train the participant with ASD, because eventually this new situation can cause stress. | No | The results of the tests with users can be biased if an induction is carried out beforehand. Emphasis has been placed on providing clear and concise instructions before and during the experiment.
Add more detail in the user testing substage. | Yes | Greater detail has been provided in substage S2.3, emphasizing the considerations that must be kept in mind when interacting with people with ASD.
Specify the number of participants, and if there will be a control group (with people without ASD) and/or an experimental group. | Yes | It has been detailed that the experiments should be carried out with people with ASD. It is recommended to have three or five participants with ASD [38].
Detail the faculties that the tutor will have during the tests with users. | Yes | It is detailed that the tutors must provide support to the participants, in case they are overwhelmed or do not understand the tasks to be carried out.
Table 4. Results of question O3.
Comment | Has Been Considered? | Justification/Action
Consider possible problems in the estimated times for each planned task. | Yes | The suggestion was added as something to consider when planning the user tests.
Document if the participants have answered the preliminary and/or perception questionnaires with the support of the tutors or autonomously. | Yes | This has been included in substage S2.3.
Table 5. Results of question O4.
Comment | Has Been Considered? | Justification/Action
Add a new stage, substage or product that details a possible “contingency plan”. | No | We believe that the detail provided is sufficient as a basis for how to act in adverse situations.
Specify the link of the methodology and evaluation methods with the characteristics of people with ASD and/or proposed UX factors. | Yes | The suggestion has been included. The evaluation methods and the proposed UX factors [10] were selected/created based on the characteristics of people with ASD. It is recommended to particularize the instruments used in the evaluation methods for people with ASD.
Table 6. Results of question O5.
Comment | Has Been Considered? | Justification/Action
The number of evaluators should be as proposed by Nielsen and Landauer [38]. | Yes | The suggestion has been included in substage S1.3 (Evaluators Selection).
The methodology should consider the dependency and/or autonomy of people with ASD. | Yes | This suggestion has been included in stage S1 (Planning Stage).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
