Article

Identification of Challenges and Best Practices for Including Users with Disabilities in User-Based Testing

1 Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, 2000 Maribor, Slovenia
2 ETS Ingenieros Informáticos, Universidad Politécnica de Madrid, 28040 Madrid, Spain
3 Department of Information Technology, Turiba University, Graudu Street 68, LV-1058 Riga, Latvia
4 Institute of Information Technology, Riga Technical University, LV-1048 Riga, Latvia
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(9), 5498; https://doi.org/10.3390/app13095498
Submission received: 2 April 2023 / Revised: 21 April 2023 / Accepted: 24 April 2023 / Published: 28 April 2023
(This article belongs to the Special Issue Human-Computer Interaction: Designing for All)

Abstract

Despite efforts to promote digital inclusion for all, individuals with disabilities continue to experience a significant digital divide. Developing usable and accessible solutions demands user-based testing with real end users, particularly users with disabilities, so that the real problems and barriers they experience can be detected and addressed, and usability and accessibility can be improved based on their feedback through a user-centered approach. When including users with disabilities in user testing, additional attention must be paid to planning and carrying out the testing in a way that enables their successful and efficient participation, taking into account the various restrictions related to their disabilities. Through a systematic literature review, we collected and analyzed the challenges of including users with disabilities in user-based testing, as well as best practices that researchers can apply in future user-based testing with such users. The existing literature shows a positive trend toward publishing articles that describe testing with users with disabilities, but also an apparent need for more complete reporting on some phases of the testing studies. The main result of this study is a list of challenges and best practices that are important in the different phases of user-based testing with users with disabilities.

1. Introduction

Disability is an umbrella term for impairments, activity limitations, and participation restrictions. It denotes the negative aspects of the interaction between a person’s health condition(s) and that individual’s environmental and personal factors [1]. Persons with disabilities include those who have long-term physical, mental, intellectual, or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others [2].
According to the World Health Organization’s estimation, 1.3 billion people, representing 16% of the global population, experience a significant disability. The number is growing because of increased noncommunicable diseases and people living longer [3]. Mobility disability, the most common disability, affects 1 in 7 adults [4]. In 2019, it was estimated that 970 million people were living with a mental disorder [5]. Vision impairment affects the quality of life of nearly 2.2 billion people, who often have lower workforce participation and productivity rates and higher rates of depression and anxiety [6,7]. By 2050, nearly 2.5 billion people will have some degree of hearing loss, and at least 700 million will require hearing rehabilitation [8].
Different impairments affect many aspects of a person’s life including communication and speech, cognition, education, employment, etc. For example, persons with brain injury, locked-in syndrome, cerebral palsy, muscular dystrophy, or amyotrophic lateral sclerosis may have limited limb movement ability [9]. People with neuromuscular problems usually lose a degree of autonomy in their daily activities [10]. People with severe disabilities such as amyotrophic lateral sclerosis, motor neuron diseases, cerebral palsy, stroke, and spinal cord injury with intubation have different degrees of communication problems [11]. With ageing, disability becomes more common, affecting around two in five adults aged 65 and older. The number of older people prone to various diseases that impact their effective use of information communication technologies (ICT) is expected to double by 2050 [12].
Disability has started to gain momentum as a priority issue in international cooperation for inclusive development [13]. In the 2030 Agenda, disability is recognized as a cross-cutting issue and addressed in the following Sustainable Development Goals [14]: Goal 4—Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all; Goal 8—Promote sustained, inclusive, and sustainable economic growth, full and productive employment, and decent work for all; Goal 10—Reduce inequality within and among countries; Goal 11—Make cities and human settlements inclusive, safe, resilient, and sustainable; and Goal 17—Strengthen the means of implementation and revitalize the Global Partnership for Sustainable Development.
Although tremendous effort has been put into enabling digital inclusion and ensuring that every individual has access to information and communication technologies (ICT), people with disabilities still face a significant digital divide [15]. Inclusive design is essential to creating products that target a diverse audience. User interface (UI) designers face various challenges when designing UIs for interactive systems because of heterogeneity, which can also result from the diversity of end users and interaction modalities [16]. Designers and developers of adaptive applications, whose UIs adapt interaction content and information processing modes to users’ needs and disabilities, face particular difficulties in implementing applications that keep up with the dynamics of their environment [17].
The successful development of usable and accessible solutions demands designing and conducting usability studies. Usability evaluation can ensure that interactive systems (i.e., software, a website, or any information and communication technology or service) are adapted to the users and their tasks and that there are no adverse outcomes of their usage [18]. In user-based testing, it is very important to include real end users, because only with their feedback can the development process result in applications that best serve their interests. Evaluation by real users is the most valuable technique because it enables the detection of the real problems and barriers that users experience while using the solution [19].
In general, when designing and conducting usability testing, it is recommended to follow the principles and guidelines provided by existing standards. For applying interaction principles and general design recommendations for interactive systems, ISO 9241-110:2020 [20] can be followed. A classification of evaluation techniques and guidelines for preparing an evaluation report based on a common industry format and the selected evaluation approach(es) are provided by ISO/IEC 25066:2016 [21]. The standard ISO 9241-210:2019 [22] provides requirements and recommendations for human-centered design (HCD) principles and activities throughout the life cycle of computer-based interactive systems.
In the existing literature, researchers often report practical experiences and some general guidelines on conducting user-based testing. However, guidelines or recommendations for successful user-based testing with users with disabilities are missing. Therefore, user-based testing with users with disabilities calls for research and activities that will establish appropriate frameworks and guidelines for the effective inclusion of such users, as well as equal opportunities for users with different types of disabilities. To the best of our knowledge, this study is the first attempt at (1) identifying the most common barriers when including users with disabilities in user-based testing and (2) identifying the good practices that need to be followed in the design and execution of user-based testing that involves users with disabilities. The main contribution of this study is a list of challenges and best practices for involving users with disabilities in user-based testing that have been documented in the existing scientific literature. The result of our research can be used to identify shortcomings in the existing literature in this field. The results of this study also provide the basis for building a catalogue of patterns for the adequate inclusion of users with disabilities in user testing.

2. Background

2.1. Human-Centered Design (HCD) for Usable and Accessible Solutions

The effective way to achieve high usability in modern solutions is to incorporate human-centered design (HCD) into the development process [23]. HCD “is an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs and requirements, and by applying human factors/ergonomics, and usability knowledge and techniques. This approach enhances effectiveness and efficiency, improves human well-being, user satisfaction, accessibility and sustainability; and counteracts possible adverse effects of use on human health, safety and performance.” [22] In the HCD process, the following principles specified by ISO 9241-210 [22,24] should be followed: (1) understand the user, the task, and the environmental requirements; (2) encourage the early and active involvement of users; (3) be driven and refined by user-centered evaluation; (4) include iteration of design solutions; (5) address the whole user experience; (6) encourage multi-disciplinary design.
Usability, accessibility, user experience (UX), and HCD have become essential for guaranteeing the quality and success of a software project [25]. When developing solutions tailored to the needs of users with disabilities, a user-centered approach allows the experiences of such users to be understood and, based on the results, the usability, UX, and accessibility of these solutions to be improved [26]. Usability is defined by ISO 9241-11 (Ergonomics of human–system interaction—Part 11: Usability: Definitions and concepts) as the “extent to which specified users can use a system, product or service to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” [27]. UX, on the other hand, is defined as the “user’s perceptions and responses that result from the use and/or anticipated use of a system, product or service. Users’ perceptions and responses include the users’ emotions, beliefs, preferences, perceptions, comfort, behaviours, and accomplishments that occur before, during and after use.” [22] Accessibility is defined as the “extent to which products, systems, services, environments and facilities can be used by people from a population with the widest range of user needs, characteristics and capabilities to achieve identified goals in identified contexts of use” [27].

2.2. User-Based Testing with Users with Disabilities

One of the most widely used methods that involves end users is usability testing, also called user-based testing, in which users perform real tasks with a prototype or a real system [23], usually through one or more specific UIs. In the iterative user-centered design process, end-user participation is essential for applying techniques such as observation of users, performance-related measurements, questionnaires, interviews, and thinking aloud [28]. Especially when targeting users with disabilities, it is essential to involve such users in user-based testing to ensure that the product is usable and accessible to everyone. Usability assessment has been recognized as a critical success factor in developing usable interactive systems with increased productivity, reduced errors, reduced need for user training and user support, and improved acceptance by their users [29].
User-based testing with users with disabilities is paramount for investigating interaction and other difficulties with the product and for enhancing its accessibility [30]. Moreover, the expertise of each user, the configuration of the system, and the assistive products [31] utilized by the user are just a few of the variables that can determine whether the user manages to overcome a potential barrier [19]. Users can also be asked to explore the product freely while their behaviors are observed and recorded to identify design flaws that cause user errors or difficulties [18]. During the session, one or more researchers observe the participant’s behavior and listen for feedback, enabling different variables to be measured (e.g., task completion time, accuracy of execution, navigation behavior, etc.). Based on the qualitative and quantitative data analysis, the product’s usability can be improved. Participatory assessment methods, in which real end users employ a UI as they work through task scenarios and give feedback, provide better insight into the underlying causes of the usability problems users encounter [29].
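To make the kind of measures mentioned above concrete, the following minimal sketch shows one way a moderator could log per-task observations during a session. It is an illustration only; the field names (e.g., assistance_requests) are our own and are not taken from any reviewed study.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskObservation:
    """Illustrative record of measures an observer might log for one task."""
    task_id: str
    completed: bool
    completion_time_s: float      # time from task start to completion or abandonment
    errors: int                   # number of erroneous actions observed
    assistance_requests: int      # times the participant asked the moderator for help
    notes: str = ""               # qualitative observations (think-aloud remarks, barriers)

@dataclass
class TestSession:
    """One user-based testing session with an individual participant."""
    participant_id: str
    assistive_technology: List[str] = field(default_factory=list)  # e.g., screen reader
    observations: List[TaskObservation] = field(default_factory=list)

    def task_success_rate(self) -> float:
        """Share of tasks completed successfully in this session."""
        if not self.observations:
            return 0.0
        return sum(o.completed for o in self.observations) / len(self.observations)
```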
When user-based testing aims to assess the solution’s performance in terms of task completion speed or errors made while using the solution, it does not matter which users are included. Existing research has shown that the performance of users with disabilities while testing a product can be very close to that of users without disabilities [26,32,33]. However, when analyzing and comparing the benefits of products for users with disabilities, the results between users with and without disabilities can differ significantly. The difference could be attributed to users with disabilities drawing greater advantage from new technology, while those without disabilities benefit less or do not need the benefits as much. For example, the study by Giudice et al. demonstrated that older adults who are blind or have a visual disability and experience difficulties in navigating could benefit significantly from a navigation system, and evaluating older participants with a visual disability is important, as the majority of vision loss is related to age [32].

3. Materials and Methods

We conducted a systematic literature review (SLR) to identify state-of-the-art research in the field of end-user testing with users with disabilities and to observe trends in research. We followed the guidelines prepared by Kitchenham [34] (see Figure 1).
Five researchers were involved in the SLR process, with research activities divided into five phases, as shown in Figure 2. In Phase 1, the researchers jointly defined and agreed on the research questions (see Table 1) and developed the review protocol. In the review protocol, the researchers specified the query string for searching the relevant literature (see Section 3.2) in the selected digital libraries (see Table 2). They also jointly defined and agreed on the criteria used to decide whether a found article should be included in or excluded from further reading (see Table 3 for inclusion criteria and Table 4 for exclusion criteria). Additionally, the researchers agreed on four quality criteria (see Table 5) for selecting articles whose content was written well enough to enable data extraction based on this study’s objectives. The researchers jointly specified the data fields and developed a data extraction form in the online tool used for the SLR process.
In Phase 2, articles were identified based on the search string in the online databases, and primary studies were selected for further analysis using the stepwise approach defined in the selection process (see Section 3.2). The selection of primary studies was first filtered by screening the papers, reading the title and the abstract. All articles selected for further reading based on the inclusion criteria were additionally evaluated against the quality criteria jointly defined and confirmed by the researchers.
In Phase 3, the researchers read the selected articles and extracted data from primary studies using the designed data extraction forms (see Section 3.3). Each researcher reviewed their part of the articles, whereby the number of articles was divided equally among all researchers. In Phase 4, the extracted data were analyzed and interpreted according to the research questions. The research protocol was concluded in Phase 5 with writing activities.
The reliability and quality of the SLR process were ensured with the help of cross-checking in different phases. In Phase 2, each researcher evaluated two randomly chosen papers assigned to each of the other researchers (eight papers per researcher in total) and cross-checked whether the quality criteria had been applied correctly. In Phase 3, the cross-checking activity was repeated: each researcher evaluated one randomly chosen paper assigned to another researcher and cross-checked whether the quality of the extracted data was adequate. Access to the results of all researchers was possible through the web tool used for the literature review process. The online tool used for conducting the systematic literature review in this study was Parsifal [35], designed to support researchers in performing systematic literature reviews within the context of software engineering. The Parsifal tool enables geographically distributed researchers to collaborate by providing a shared workspace, designing the protocol, and conducting the research. This tool provided us with a shared repository of all selected articles and access to all data throughout the systematic literature review process. Since no issues were identified during cross-checking, all researchers were well coordinated in their understanding of the procedures defined in the protocol and shared the same view on the content and quality of the data.

3.1. Phase 1: Definition of Research Questions and Review Protocol

The primary goal of this study is to identify, analyze, and synthesize existing work in the usability testing field. The main objectives are (1) to systematically review relevant scientific articles in order to conduct an SLR of the field of user-centered testing with users with disabilities and (2) to present trends and a demographic analysis of the research field. Based on the research goal, we formulated three main research questions, RQ1, RQ2, and RQ3, as presented in Table 1.

3.2. Phase 2: Selecting the Primary Research

After identifying the research questions, we defined appropriate keywords for finding all published articles on topics related to user-centered testing with users with disabilities. As we wanted to provide a comprehensive overview of the research area, broad keywords were used. The elementary search query string used for finding published articles was the following:
(“user test*” OR “user evaluation”) AND (“UX” OR “user experience” OR “accessibility” OR “accessible” OR “barriers” OR “diversity” OR “usability”) AND (“disab*” OR “Down’s syndrome” OR “hard of hearing” OR “intellectual disability” OR “low dexterity” OR “low vision” OR “ASD” OR “autis*” OR “blind*” OR “deaf*” OR “dyslex*” OR “hemiplegia” OR “impairment” OR “paraplegia” OR “wheelchair”).
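The query is simply three AND-ed groups of OR-ed terms (testing method, quality attribute, disability). The following minimal sketch, which is not the authors’ tooling, shows how such a string can be composed programmatically before being adapted to each library’s advanced search syntax.

```python
# Compose the boolean query from the three concept groups used in the review.
method_terms = ['"user test*"', '"user evaluation"']
quality_terms = ['"UX"', '"user experience"', '"accessibility"', '"accessible"',
                 '"barriers"', '"diversity"', '"usability"']
disability_terms = ['"disab*"', "\"Down's syndrome\"", '"hard of hearing"',
                    '"intellectual disability"', '"low dexterity"', '"low vision"',
                    '"ASD"', '"autis*"', '"blind*"', '"deaf*"', '"dyslex*"',
                    '"hemiplegia"', '"impairment"', '"paraplegia"', '"wheelchair"']

def or_group(terms):
    """Join the terms of one concept into a parenthesized OR block."""
    return "(" + " OR ".join(terms) + ")"

# AND the three concept groups together, reproducing the elementary query string.
query = " AND ".join(or_group(g) for g in (method_terms, quality_terms, disability_terms))
print(query)
```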
To find the relevant literature, we used the following established publicly available digital libraries: ACM Digital Library, IEEE Xplore, Scopus, and Web of Science. The first search conducted using the digital libraries yielded 1085 articles. The number of articles retrieved from the selected digital libraries and used as input for the next steps of the selection process is presented in Table 2. The largest number of articles was found in the ACM Digital Library (562), followed by Scopus (255), IEEE Xplore (152), and Web of Science (116). To include relevant articles and exclude irrelevant ones, several inclusion criteria (see Table 3) and exclusion criteria (see Table 4) were applied. The last step in the selection process was to assess the quality of the articles against the defined criteria (see Table 5). Articles with a quality score of at least 25 were selected as primary studies and included in the subsequent analysis.
We followed several criteria for selecting studies for data extraction (see Table 3). Since this study aimed to review challenges and best practices in user-based testing with users with disabilities, only articles that reported user-based testing of a product or service were included (I1). We searched for this information in the title, abstract, and full text. Since this review involved an international team of researchers, the inclusion criterion I2 (language) was set to prevent errors in data selection and extraction, as articles written in a language other than English could prevent cross-checking and comparison between individual researchers. The inclusion criterion I3 was set to include only articles accessible and downloadable in full length through the digital libraries. The final inclusion criterion (I4) was set to select articles published in journals, which usually describe the performed studies comprehensively. Although conference papers tend to be shorter and often provide less detail than journal papers, quality papers can also be found in conference proceedings. Therefore, at this stage, we decided to also include articles published in conference proceedings. However, before the selected papers were finally reviewed for data extraction, they were assessed against the defined quality criteria (see Table 5).
We also followed several criteria for excluding identified studies from data extraction (see Table 4). With exclusion criterion E1, the research team decided to restrict the research to the last ten years (2012–2022). The main reason for this decision was to obtain a balanced inclusion of different types of disability; for instance, cognitive accessibility is an issue that only started to appear in research in the last few years. Next, because the same article can be indexed in different databases and searches in different digital libraries often return links to the same article, the search results were checked for duplicates, which were removed from the final list of selected articles (E2). In this study, we were interested in user-based testing, specifically in computer-based HCI; therefore, all articles reporting studies of non-computer-related HCI were excluded (E3). The research team also agreed to exclude non-original articles that reported results from existing literature reviews or systematic mapping studies (E4). Finally, to be able to extract all data as specified in Section 3.3, studies for which only abstracts were accessible (E5) were excluded from the final list.
In the literature review, we searched for peer-reviewed articles published in journals and in the proceedings of international conferences. Such publications are expected to have gone through review procedures. However, we still carried out an additional analysis of the writing quality (Q1) of the articles found: we checked whether each article contained well-written essential elements, such as an abstract, introduction, materials and methods, discussion, and conclusions, from which it was possible to effectively understand the data of interest to our research. Each article was also assessed based on the quality of the venue (Q2), checking whether the article was published in a journal with an impact factor or in a conference proceeding; venue quality was mainly based on inclusion in the Journal Citation Reports (for journals) or the CORE ranking (for conferences). In each article, we also evaluated whether the testing procedures were described and what the quality of the procedure description was, i.e., whether the description covered different phases such as the preparation of tasks, recruitment, execution of the test, etc. (Q3). Since our research aimed to gather barriers and good practices in the inclusion of users with disabilities, the article needed to report the implementation of a study involving such users (Q4). We were interested in those studies that, based on practical experience, could provide the challenges of including users with disabilities in user-based testing and good practices grounded in practical experience. A hypothetical sketch of this quality screening step is given below.
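The sketch below illustrates how the four quality criteria and the score threshold of 25 could be applied in practice. The criterion identifiers Q1–Q4 come from the review protocol, but the per-criterion scale and the example scores are assumptions for illustration only; the actual scale is defined in Table 5.

```python
# Hypothetical quality screening step; the 0-10 per-criterion scale is assumed.
QUALITY_CRITERIA = ("Q1_writing", "Q2_venue", "Q3_procedure_description", "Q4_users_with_disabilities")
THRESHOLD = 25  # articles scoring at least 25 were kept as primary studies

def passes_quality_assessment(scores: dict) -> bool:
    """Return True if an article's combined quality score meets the threshold."""
    missing = [c for c in QUALITY_CRITERIA if c not in scores]
    if missing:
        raise ValueError(f"Missing scores for: {missing}")
    return sum(scores[c] for c in QUALITY_CRITERIA) >= THRESHOLD

# Example: an article judged well written (Q1), published in a ranked venue (Q2),
# with a detailed procedure description (Q3) and users with disabilities involved (Q4).
print(passes_quality_assessment(
    {"Q1_writing": 8, "Q2_venue": 7, "Q3_procedure_description": 6, "Q4_users_with_disabilities": 9}))
```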
The process of selecting the relevant literature was carried out in several steps (see Figure 3 and Table 6). In the second step, we limited the set of studies by applying exclusion criterion E1 to include only studies published in 2012 or later, which resulted in a set of 852 studies. In the next step, removing 34 duplicate entries left 818 articles for the next step of the selection process. In the fourth step, the remaining inclusion and exclusion criteria (I1–I4 and E3–E5) were applied to exclude irrelevant studies. Based on the titles, abstracts, and conclusions, the studies that address user testing of a product or service with people with disabilities were selected. Although the language criterion had already been applied in the previous step, some articles had titles and abstracts written in English while the rest of the article was not written in English. This step yielded 134 articles published in a journal, conference proceedings, or a book section. The selection process was concluded with the quality assessment, based on which 84 articles were found relevant and marked as primary studies. The list of primary studies S1–S84 identified in the literature review is provided in Table A1 in Appendix A.

3.3. Phase 3: Data Extraction

To ensure that all primary studies would be analyzed consistently, a data extraction form was designed to accurately record the information obtained from the primary studies. In addition, the extraction form was used to ensure the high-quality collection of all information needed to address the review questions. As presented in Table 7, the data collection form was divided into two sections: Section A covers the extraction of standard information about the primary study, while Section B addresses the content data. Aggregating the data extracted from all primary studies provided relevant results for further analysis; the main results are presented in the following section.
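The following minimal sketch mirrors the two-part structure of the extraction form described above. The field names are our own shorthand for the kinds of data discussed in the results, not the exact fields of the form in Table 7.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SectionA:
    """Standard (bibliographic) information about a primary study."""
    study_id: str          # e.g., "S43"
    authors: str
    year: int
    venue: str
    publication_type: str  # journal / conference proceedings / book chapter

@dataclass
class SectionB:
    """Content data relevant to the research questions."""
    disability_types: List[str]                     # e.g., ["blind", "low vision"]
    participants_with_disabilities: Optional[int]   # None if not reported
    reported_activities: List[str]                  # e.g., ["recruitment", "post-test questionnaire"]
    challenges: List[str] = field(default_factory=list)
    good_practices: List[str] = field(default_factory=list)

@dataclass
class ExtractionRecord:
    """One completed data extraction form for a primary study."""
    section_a: SectionA
    section_b: SectionB
```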

4. Results

4.1. Trends and Demographics of the Literature within the Field

As illustrated in Figure 4, the number of published studies has been increasing in the last decade. A clear trend in increased interest in the observed field can be reported. Along with the increasing number of primary studies in recent years, we also noticed the change in the article types: a positive trend in the number of journal papers compared to other primary research types can be observed since 2016.
As shown in Figure 5, most (69%) papers reporting user-based testing studies involving users with disabilities have been published in journals. A total of 21% of articles were published in conference proceedings, and 10% of studies were published in books as a chapter.
We ranked the journals and conferences by the number of primary articles published to obtain a broader view of the top venues for literature reporting the results of user-based testing that included users with disabilities. The results presented in Table 8 indicate that research articles can be found in a broad spectrum of journals. Most primary studies have been published in Universal Access in the Information Society (8), ACM Transactions on Accessible Computing (7), Assistive Technology (4), Disability and Rehabilitation: Assistive Technology (4), and Frontiers in Neuroscience. Seven papers presented at different conferences were published as chapters in Lecture Notes in Computer Science. The conferences with the most published studies about user-based testing with users with disabilities are the International Web for All Conference, the Iberian Conference on Information Systems and Technologies (CISTI), the International ACM SIGACCESS Conference on Computers and Accessibility, and the CHI Conference on Human Factors in Computing Systems.

4.2. The Research Space of the Literature within the Field of User-Based Testing with Users with Disabilities

In the existing literature related to testing with users with disabilities, some groups of users with disabilities are much better represented than others (see Table 9). Blind users were included in user testing most often (40 studies), followed by visually impaired users (33 studies). Users with intellectual disabilities (N = 12), deaf users (N = 7), and users with hearing disabilities (N = 5) were also frequently included. Additionally, some studies included users with mobility impairments, namely wheelchair users (N = 7), users with quadriplegia (N = 5), users with hemiplegia (N = 5), and users with paraplegia (N = 3). Few studies included users with autism (N = 3), users with dyslexia (N = 1), and users with Down’s syndrome (N = 1). In 30 cases, we classified the type of disability as “other” according to the classification schema specified in the data extraction form. We did not find a study in which it was impossible to determine the type of disability of the users involved in the user-based testing.
From more than half of the analyzed articles, we were able to extract information about the following activities: the preparation of the prototype (reported in 41 articles), the recruitment of users with disabilities (51 articles), the implementation of the testing (61 articles), and the implementation of post-test questionnaires (51 articles) (see Figure 6). More than 30 articles described the preparation of test tasks and the pre-test and post-evaluation questionnaires. Other activities (e.g., obtaining the consent of the ethics committee and welcoming participants) are rarely described in the articles. Considering that users with disabilities are invited to participate in these studies, we would expect to see more papers reporting how ethics committee approvals were obtained.
Regarding the number of users involved in the testing, existing user-based testing research most often (N = 24) involved between 6 and 10 users with disabilities (see Figure 7). Existing user testing studies generally involve a small number of users with disabilities (17 on average). In 9 studies, up to 5 users with disabilities were included. Between 11 and 15 users with disabilities participated in 16 studies, and between 16 and 20 users with disabilities participated in 8 studies. There were 5 studies that included between 21 and 25 users with disabilities. We also found 6 studies in which more than 40 users with disabilities were included, and in 6 cases, the number of participating users with disabilities was not reported.

4.3. The Challenges of Including Users with Disabilities in User-Based Testing

In the existing literature, the authors identified numerous challenges in user-based testing in general, and especially when the testing included users with disabilities. Among the most documented was the poor experience of performing testing online, which was often the case during the pandemic. The results of remote testing were unreliable, with users with disabilities commonly reporting confusion and insufficiently clear instructions as the primary sources of problems. In several cases, the user testing could only be completed with the help of caregivers, especially when children and young adults with communication difficulties were involved. The complexity of tools and problems in setting up devices to meet individual users’ needs were also reported, mainly for people with visual impairments and people with postural issues, including older adults who could not finish long seated sessions. Cultural differences were also addressed in user-based testing with people with or without disabilities, and lastly, the lack of compensation for users was noted as a considerable barrier. All challenges identified in the literature review are reported and explained in Table 10.

4.4. Best Practices for Successful Inclusion of Users with Disabilities in User Testing

There are many general guidelines available for user testing activities, mostly based on providing accessible material and an accessible testing environment as well as using inclusive and non-offensive language. The systematic literature review yielded several specific pieces of advice and guidelines on how to implement user testing; some of them were repeated in several articles and are summarized in Table 11. To place them into the three phases of user testing, we classified the identified suggestions into good practices (1) before user testing, (2) during user testing, and (3) after user testing. The largest issue while extracting good practices was accounting for the differences between disability types and, consequently, the different requirements of each type, which makes creating a single list of good practices for everyone or for all types of impairments almost impossible.
To understand how the identified good practices cover the challenges identified in the existing literature, and to create a visual representation of the two data sets, we prepared a table mapping good practices to the identified challenges wherever applying a good practice can help solve a challenge or part of it. Three researchers performed the mapping process with the help of a Google spreadsheet. The same layout table of best practices and challenges was created for each researcher, and each had the task of identifying associations between the two variables: the challenges and the best practices. In cells where, in a researcher’s opinion, a good practice could be mapped to a challenge, the researcher made a notation. Once all researchers had evaluated the mapping possibilities, the results were compared. In cases where all researchers agreed on a connection between a challenge and a best practice, the mapping was automatically confirmed. In cases where the researchers did not fully agree on a link, a discussion followed, and as long as at least two researchers decided on a mapping, it was included. In cases where only one researcher identified the connection, the mapping was not included. A minimal sketch of this consensus rule is given below.
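The sketch assumes each researcher submits the set of (challenge, practice) pairs they consider related; the challenge and practice identifiers are illustrative placeholders rather than the actual labels from Tables 10 and 11.

```python
from typing import Dict, List, Set, Tuple

Pair = Tuple[str, str]  # (challenge_id, practice_id), e.g., ("C3", "BP7")

def consolidate(markings: List[Set[Pair]]) -> Dict[str, Set[Pair]]:
    """Split candidate mappings by how many researchers marked them."""
    counts: Dict[Pair, int] = {}
    for researcher_pairs in markings:
        for pair in researcher_pairs:
            counts[pair] = counts.get(pair, 0) + 1
    n = len(markings)
    return {
        "confirmed": {p for p, c in counts.items() if c == n},       # unanimous: accepted outright
        "to_discuss": {p for p, c in counts.items() if 2 <= c < n},  # kept if >=2 still agree after discussion
        "rejected": {p for p, c in counts.items() if c == 1},        # single vote: not included
    }

# Example with three researchers' markings:
result = consolidate([
    {("C1", "BP2"), ("C3", "BP7")},
    {("C1", "BP2"), ("C3", "BP7"), ("C4", "BP1")},
    {("C1", "BP2"), ("C4", "BP1"), ("C5", "BP3")},
])
print(result["confirmed"], result["to_discuss"], result["rejected"])
```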
As can be seen from Table 12, applying certain individual practices helps solve various challenges in user-based testing with users with disabilities. Moreover, solving an individual challenge often requires applying several different good practices. Some good practices make it possible to address several challenges; an example is the involvement of experts in testing that involves users with disabilities, as an expert in this field can help address various challenges in different user testing phases. At the same time, some challenges will have to be addressed in the future by searching for additional good practices, as the existing literature currently offers few practices that address the recognized difficulties.

5. Discussion

In this study, based on the synthesis of existing knowledge obtained through a systematic review and analysis of 84 studies, we provide (1) an overview of the latest research in the field of user testing in which users with disabilities were actively involved, (2) an overview of the various challenges that researchers faced when involving persons with disabilities, and (3) a review of good practices through which researchers have successfully included users with disabilities in user testing studies. The needs and expectations of end users can differ significantly depending on the type of disability; as a result, the challenges and best practices applicable to users with a specific type of disability can also differ. Due to the lack of research in the existing literature, it is currently difficult to develop either general guidelines for implementing best practices across all types of disabilities or a list of best practices associated with a specific disability. For this reason, it will be necessary to wait for new research, which can be expected given the established trend of published research in recent years.
Even though the field of usability and user experience evaluation is already well researched, there is a lack of studies that comprehensively address the various challenges associated with including users with disabilities in user testing. In usability or user experience evaluation studies that involve users with disabilities, it is necessary to understand all the challenges related to the testing method and, of course, the characteristics and obstacles of such users. In the existing literature, we observe a positive growth in studies that include users with disabilities as active participants. This outcome is encouraging and consistent with the goals of various national and international directives aiming to enable equality for all in all areas.
The results of our study showed that it would be necessary to improve the reporting of all user testing phases in scientific publications. User-based testing involves several phases, with different activities performed within the stages. For user-based testing with users with disabilities especially, planning and executing all steps well with attention to the requirements and characteristics of users’ disabilities is necessary. In the first phase, which is related to testing planning, it is essential to carry out various activities, for example, preparing tasks, preparing a prototype/solution to be tested, obtaining permission from the ethics committee, recruiting, and others. In the user testing phase, there are also several related activities (e.g., welcome, pre-testing questionnaire, testing, post-testing questionnaire) to which we must pay special attention because of users with disabilities. Additionally, in the phase when user testing is completed, activities related to included users with disabilities can still be carried out (e.g., activities of obtaining feedback after analysis).
Researchers who plan to conduct user-based testing with users with disabilities in the future need articles that provide all the information required for comprehensive planning and practical testing. In the existing literature, some phases and activities of user-based testing with users with disabilities are noticeably under-reported, which will undoubtedly have to improve in the future. One such example is the rarely described activity of obtaining ethical approval for conducting the study. Since users with disabilities are invited to participate in these studies, we expect to see more papers reporting how ethics committee approvals were obtained and how user testing activities should be adapted accordingly, for example, through a detailed review of research proposals to ensure that they meet ethical and legal standards, protect the rights and well-being of research participants, and maintain the integrity of the research itself.
In the existing literature, we found some studies in which a larger number of users with disabilities were included in the testing procedures. In general, however, a smaller number of users participate in user testing studies, and our study showed that the same holds for research in which users with disabilities are included. Of course, the number of users who participate in testing depends on the type of testing, the testing objectives, and other criteria based on which the required number of participants is defined. In our study, we did not analyze and classify the types of testing in the analyzed studies.
In addition to a comprehensive description of the implementation of individual phases, the authors should document good practices that contributed to the successful inclusion of users with disabilities and the successful implementation of testing. Additionally, we expect that the researchers will provide all the challenges to avoid the repetition of mistakes in future user-based testing with users with disabilities.

5.1. Theoretical and Practical Implications

The theoretical implications of the results of this study, which will be especially interesting for researchers and practitioners, lie in the catalogue of challenges and good practices that had not yet been documented in this way in the existing literature. The study results provide researchers and practitioners with a single place to look for challenges and good practices, based on which they can, already in the planning phase of their user evaluation studies, eliminate various factors that can negatively impact the successful implementation of user-based testing with users with disabilities.
The results of this study have several practical implications for different stakeholders: (1) developers who design and develop interactive solutions, (2) testers who evaluate the usability and user experience of the product, and (3) end users with disabilities who use the solutions.
The first practical implication is related to planning and developing user-friendly, useful, and accessible solutions, for which user testing is essential to ensuring quality. Developers will learn to improve the design and development of accessible solutions for all users, including those with disabilities. Especially for user-based testing of developed products, the results of this study will improve awareness of the challenges related to the inclusion of users with disabilities in end-user testing procedures. The results can also help improve testing procedures, as the knowledge provided will make practitioners better aware of good practices for actively involving such users in testing studies. Next, the results of user testing with actively involved users with disabilities will give better insight into all product problems related to the use of and interaction with the product. Consequently, the practical outcome will be better solutions for all users.
Another significant practical implication of the results of this study is that they enable better opportunities for users with disabilities to participate actively in user-based testing processes, from which they were often excluded in the past, perhaps also due to a lack of knowledge about how such studies need to be correctly planned and carried out. Consequently, this will also improve the possibilities for users with disabilities to be included in all activities related to user testing.

5.2. Limitations

As in all research, this study comes with a set of limitations. Although several digital libraries were used and five researchers were involved in searching for relevant literature, there is a chance that not all relevant studies were identified and included in the review after applying the inclusion and exclusion criteria and the quality assessment. The keywords used in searching for relevant literature yielded studies that included users with various disability types; it is nevertheless possible that we missed some studies involving users with a specific disability type not covered by the search string.
The time scope of the research was set between 2012 and 2022. Although, based on our research, not much relevant work was conducted before this period, there could be papers offering significant insight into the beginnings of user-based testing with users with disabilities. Articles still in the review or publishing process, and therefore not available when this manuscript was written, were also not included in the systematic literature review; new important articles may emerge in 2023 after the conclusion of this paper. The classification of studies by type of disability was limited by the pre-defined list of disabilities in the form we used to collect data during the article review. Based on the results, it would be necessary to redefine this list because, according to it, we could not classify 30 studies into one of the listed disability types. In future studies, it will be necessary to prepare a better classification scheme to understand more effectively how different user disabilities are represented in the existing literature.
Within the framework of the identified challenges and best practices for involving users with disabilities in user-based testing, it is necessary to consider the limitations associated with the scope of studies published in the existing literature. The list of challenges and good practices in this study was constructed based on the data extracted from the reviewed papers only. The good practices identified in this study are not fully aligned with the challenges: at the time of writing this manuscript, the list of challenges is larger, which means that further research will have to provide additional good practices to help overcome the challenges not yet well addressed in the existing literature.

5.3. Threats to Validity

There is a threat that the results of the data synthesis from reviewed literature miss some other challenges and best practices because of the probability that not all relevant primary studies were discovered during the literature search. The selection of the primary studies was guided by the set inclusion and exclusion criteria within the scope of the reported search string; therefore, it is possible that some articles were automatically excluded in the process.
There is a threat of bias in the studies that were selected for data extraction. We performed cross-checking on randomly selected papers to decrease the bias in the study selection. However, Cohen’s Kappa statistic should be used to further reduce the bias and the time spent in the study selection process [69].
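For readers unfamiliar with the statistic, the following minimal sketch computes Cohen’s Kappa directly from its definition, kappa = (p_o - p_e) / (1 - p_e), for two reviewers making include/exclude decisions. The example decisions are invented for illustration only.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters over the same items, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired, non-empty decisions"
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # observed agreement
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)  # agreement expected by chance
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Example: two reviewers screening ten candidate papers (I = include, E = exclude).
a = ["I", "I", "E", "I", "E", "E", "I", "I", "E", "I"]
b = ["I", "E", "E", "I", "E", "I", "I", "I", "E", "I"]
print(round(cohens_kappa(a, b), 2))
```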
Data extraction and classification of the primary studies were challenging due to the interdisciplinary characteristics of the field. Some subjectivity is implied in data extraction; therefore, if the study were repeated, minor variations could be expected. For this reason, we have attached the list of studies included in the review as Appendix A. To minimize the threats to internal validity related to the data extraction procedure, we used a predefined data extraction scheme, and the authors discussed the problematic papers.

6. Conclusions

This study contributes to the body of research on accessible HCI, particularly in planning and conducting user-based testing that includes users with disabilities. This study synthesizes the main challenges related to the inclusion and active participation of end users with disabilities in user-based testing procedures. Based on the synthesis of knowledge extracted from 84 scientific papers identified in the existing literature, this study provides a list of good practices that should be followed when planning and conducting user-based testing with users with disabilities.
Analysis of the extracted data offered the following main findings. In the existing literature, we observe a positive trend of growth in studies that include users with disabilities in the user testing process, which makes us optimistic that such users will be included even more in future studies. Existing studies have primarily included users with visual disabilities and users with intellectual disabilities. Other groups with other disability types are somewhat less included, meaning future research needs to pay more attention to including users with other disabilities, such as mobility disabilities, hearing disabilities, and others.
The subsequent important finding relates explicitly to comprehensive reporting on the planning and implementation of all user testing phases. Based on the systematic literature review, we can identify deficiencies in the scientific publications’ reporting of the steps taken in the user testing process. In the existing articles, the researchers mostly report the testing implementation, the procedures for recruiting users with disabilities, and the implementation of questionnaires after testing. However, steps that are also very important for the proper planning and execution of testing, namely the preparation of the tasks to be performed by users during testing, obtaining the ethics committee’s opinion, obtaining feedback after testing, and others, are reported far less comprehensively. Future articles about user-based testing with persons with disabilities must report all activities carried out before and after the testing, as these are crucial for effectively including users with disabilities in the testing processes.
The results of this study provide a list of challenges and good practices that are important in the different phases of user testing: before, during, and after testing. The results also indicate an opportunity for further systematic mapping research, the goal of which would be to map challenges and good practices according to the type of disability. We also see the possibility for future research to confirm or validate the identified challenges and good practices in cooperation with users with specific types of disability who have practical experience with active involvement in user-based testing. Based on the practical experience of users with disabilities in user-based testing, the list of good practices can be updated and supplemented with practices we could not identify through the systematic literature review.

Author Contributions

B.Š. proposed the idea, designed the review process, searched the literature, reviewed the literature, extracted data, conducted the data analysis, and wrote the manuscript. K.K. reviewed the literature, extracted data, conducted data analysis, and wrote and revised the first draft. L.M.-N. searched the literature, reviewed the literature, extracted data, and revised the first draft. J.P. searched the literature, reviewed the literature, extracted data, and reviewed the first draft of the manuscript. M.P. performed the literature review, extracted data, prepared the data for analysis, and wrote and revised the first draft. All authors proofread the paper. All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support from the Slovenian Research Agency (Research Core Funding No. P2-0057). This research was conducted within the activities of the Introducing training on user Testing with people with disabilities into UX design and related higher education programmes (INTUX) project (2022-1-LV01-KA220-HED-000087964), which is a project co-funded by the Erasmus+ Programme of the European Union. The European Commission’s support for the production of this publication does not constitute an endorsement of the contents, which reflect the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors want to acknowledge the successful collaboration in the project activities with all INTUX partner organizations, including members from Funka, Turiba University, Universidad Politécnica de Madrid, the National Council of Disability Organizations of Slovenia (NSIOS), SUSTENTO (the Latvian umbrella organization for disabled persons’ organisations), Asociacion Autismo Sevilla, and the University of Maribor. The authors would like to thank Susanna Laurin from Funka for her valuable help in the field of user testing. The authors also want to acknowledge the contribution of all reviewers who carefully reviewed the manuscript and, with constructive comments, helped improve its quality.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Primary studies identified in the literature review.
ID | Authors | Year | Journal/Conference
S1 | Doush [42] | 2022 | Universal Access in the Information Society
S2 | Torrado, Jaccheri, Pelagatti, & Wold [36] | 2022 | Entertainment Computing
S3 | Silva, Freire, & Cardoso [51] | 2022 | Proceedings of the 19th International Web for All Conference
S4 | Nair, Olmschenk, Seiple, & Zhu [26] | 2022 | Assistive Technology
S5 | Vincent et al. [70] | 2022 | Disability and Rehabilitation: Assistive Technology
S6 | Alajarmeh [71] | 2022 | Universal Access in the Information Society
S7 | Lee, Hong, Jarjue, Mensah, & Kacorri [48] | 2022 | Proceedings of the 19th International Web for All Conference
S8 | Darin, Andrade, & Sánchez [68] | 2022 | International Journal of Human-Computer Studies
S9 | Królak & Zając [72] | 2022 | Universal Access in the Information Society
S10 | Fox et al. [73] | 2022 | JMIR mHealth and uHealth
S11 | Jain et al. [74] | 2022 | CHI Conference on Human Factors in Computing Systems
S12 | Ahmetovic, Bernareggi, Leporini, & Mascetti [75] | 2022 | ACM Transactions on Accessible Computing
S13 | Barbosa, Hayes, Kaushik, & Wang [76] | 2022 | ACM Transactions on Accessible Computing
S14 | Guasch et al. [50] | 2022 | Universal Access in the Information Society
S15 | Weir, Loizides, Nahar, Aggoun, & Pollard [43] | 2021 | Universal Access in the Information Society
S16 | Ito et al. [77] | 2021 | Disability and Rehabilitation: Assistive Technology
S17 | Goncu & Finnegan [52] | 2021 | A chapter in Lecture Notes in Computer Science
S18 | Lebrasseur et al. [78] | 2021 | Assistive Technology
S19 | Apu et al. [79] | 2021 | 2021 International Conference on Automation, Control and Mechatronics for Industry 4.0 (ACMI)
S20 | Cimmino, Pero, Ricciardi, & Wan [80] | 2021 | Pattern Recognition Letters
S21 | Chu, Biss, Cooper, Quan, & Matulis [44] | 2021 | JMIR Serious Games
S22 | Yeong, Thomas, Buller, & Moosajee [81] | 2021 | Journal of Medical Internet Research
S23 | Zhang et al. [82] | 2021 | Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems
S24 | Kulich, Bass, & Koontz [61] | 2020 | Assistive Technology
S25 | Leporini, Rossetti, Furfari, Pelagatti, & Quarta [83] | 2020 | ACM Transactions on Accessible Computing
S26 | Yeni, Cagiltay, & Karasu [84] | 2020 | Universal Access in the Information Society
S27 | Thorsen, Dalla Costa, Beghi, & Ferrarin [41] | 2020 | Frontiers in Neuroscience
S28 | Wesselman et al. [38] | 2020 | The Journal of Prevention of Alzheimer's Disease
S29 | Yi [85] | 2020 | Universal Access in the Information Society
S30 | Creed, Frutos-Pascual, & Williams [59] | 2020 | Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
S31 | Fogli, Arenghi, & Gentilin [46] | 2020 | Multimedia Tools and Applications
S32 | Alonso-Virgos, Baena, & Crespo [86] | 2020 | 2020 15th Iberian Conference on Information Systems and Technologies (CISTI)
S33 | Husin & Lim [87] | 2020 | Disability and Rehabilitation: Assistive Technology
S34 | Summa et al. [88] | 2020 | Computer Methods and Programs in Biomedicine
S35 | Rocha, Paredes, Martins, & Barroso [89] | 2020 | Lecture Notes in Computer Science
S36 | Giudice et al. [32] | 2020 | ACM Transactions on Accessible Computing
S37 | Sato et al. [63] | 2019 | ACM Transactions on Accessible Computing
S38 | Rocha, Gonçalves, Fernandes, Reis, & Barroso [90] | 2019 | Expert Systems
S39 | Arrue, Valencia, Pérez, Moreno, & Abascal [19] | 2019 | International Journal of Human–Computer Interaction
S40 | Efthimiou et al. [91] | 2019 | Technologies
S41 | Mattie, Wong, Leland, & Borisoff [54] | 2019 | Disability and Rehabilitation: Assistive Technology
S42 | Guo, Kong, Rivera, Xu, & Bigham [92] | 2019 | Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
S43 | Šumak, Špindler, Debeljak, Heričko, & Pušnik [33] | 2019 | Journal of Biomedical Informatics
S44 | Wittich, Jarry, Morrice, & Johnson [93] | 2018 | Optometry and Vision Science
S45 | Gonçalves, Rocha, Martins, Branco, & Au-Yong-Oliveira [30] | 2018 | Universal Access in the Information Society
S46 | Day, Jokisuu, & Smith [67] | 2018 | Lecture Notes in Computer Science
S47 | Carvalho, Dias, Reis, & Freire [47] | 2018 | Proceedings of the 33rd Annual ACM Symposium on Applied Computing
S48 | Rossetti, Furfari, Leporini, Pelagatti, & Quarta [94] | 2018 | Procedia Computer Science
S49 | Smaradottir, Håland, & Martinez [56] | 2018 | Mobile Information Systems
S50 | Alonso-Virgós, Rodríguez Baena, Pascual Espada, & González Crespo [57] | 2018 | Sensors
S51 | Agulló, Matamala, & Orero [45] | 2018 | Hikma
S52 | Reichinger et al. [95] | 2018 | ACM Transactions on Accessible Computing
S53 | Kozlowski, Fabian, Lad, & Delgado [96] | 2017 | Archives of Physical Medicine and Rehabilitation
S54 | Käthner et al. [39] | 2017 | Frontiers in Neuroscience
S55 | Deems-Dluhy, Jayaraman, Green, Albert, & Jayaraman [66] | 2017 | PM&R
S56 | Sato et al. [62] | 2017 | Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility
S57 | Senan, Wan Ab Aziz, Othman, & Suparjoh [97] | 2017 | MATEC Web of Conferences
S58 | Zhang, Zhou, Uchidiuno, & Kilic [98] | 2017 | ACM Transactions on Accessible Computing
S59 | Filippi & Barattin [64] | 2017 | Lecture Notes in Mechanical Engineering
S60 | Pereira & Archambault [99] | 2016 | Lecture Notes in Computer Science
S61 | Rocha, Paredes, Barroso, & Bessa [100] | 2016 | Lecture Notes in Computer Science
S62 | Rocha, Reis, Rego, Moreira, & Faria [101] | 2016 | 2016 11th Iberian Conference on Information Systems and Technologies (CISTI)
S63 | Paulino et al. [102] | 2016 | Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion
S64 | Lopes et al. [58] | 2016 | IRBM
S65 | Mirri, Prandi, & Salomoni [103] | 2016 | 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC)
S66 | Morales-Villaverde, Caro, Gotfrid, & Kurniawan [104] | 2016 | Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility
S67 | Gamecho et al. [105] | 2015 | IEEE Transactions on Human-Machine Systems
S68 | Godinho, Condado, Zacarias, & Lobo [106] | 2015 | Behaviour & Information Technology
S69 | Savva, Petrie, & Power [107] | 2015 | Lecture Notes in Computer Science
S70 | Navarrete & Lujan-Mora [108] | 2015 | 2015 International Conference on Interactive Collaborative and Blended Learning (ICBL)
S71 | Aizpurua, Arrue, & Vigo [60] | 2015 | Computers in Human Behavior
S72 | Miralles et al. [53] | 2015 | Frontiers in ICT
S73 | Lee, Hong, An, & Lee [55] | 2014 | Service Business
S74 | Ivanchev, Zinke, & Lucke [109] | 2014 | Lecture Notes in Computer Science
S75 | Pascual, Ribera, Granollers, & Coiduras [49] | 2014 | Procedia Computer Science
S76 | Rodriguez-Sanchez, Moreno-Alvarez, Martin, Borromeo, & Hernandez-Tamames [110] | 2014 | Expert Systems with Applications
S77 | Zickler, Halder, Kleih, Herbert, & Kübler [24] | 2013 | Artificial Intelligence in Medicine
S78 | McDaniel, Viswanathan, & Panchanathan [111] | 2013 | 2013 IEEE International Conference on Multimedia and Expo (ICME)
S79 | Schroeter et al. [65] | 2013 | 2013 IEEE International Conference on Robotics and Automation
S80 | Keskinen, Heimonen, Turunen, Rajaniemi, & Kauppinen [40] | 2012 | Interacting with Computers
S81 | Roentgen, Gelderblom, & de Witte [112] | 2012 | Assistive Technology
S82 | Brizee, Sousa, & Driscoll [37] | 2012 | Computers and Composition
S83 | Fuglerud & Røssvoll [113] | 2012 | Universal Access in the Information Society
S84 | Hassell, James, Wright, & Litterick [114] | 2012 | Journal of Assistive Technologies

References

  1. World Health Organization. International Classification of Functioning, Disability and Health (ICF). Available online: https://www.who.int/classifications/international-classification-of-functioning-disability-and-health (accessed on 20 March 2023).
  2. Inter-Parliamentary Union. From Exclusion to Equality. Realizing the Rights of Persons with Disabilities. In Handbook for Parliamentarians; Inter-Parliamentary Union: New York, NY, USA, 2015; ISBN 9789210572651. [Google Scholar]
  3. World Health Organization. Disability. Available online: https://www.who.int/news-room/fact-sheets/detail/disability-and-health (accessed on 1 April 2023).
  4. Centers for Disease Control And Prevention CDC: 1 in 4 US Adults Live with a Disability. Available online: https://www.cdc.gov/media/releases/2018/p0816-disability.html (accessed on 1 April 2023).
  5. World Health Organization. Mental Disorders. Available online: https://www.who.int/news-room/fact-sheets/detail/mental-disorders (accessed on 1 April 2023).
  6. Xie, I.; Wang, S.; Saba, M. Studies on blind and visually impaired users in LIS literature: A review of research methods. Libr. Inf. Sci. Res. 2021, 43, 101109. [Google Scholar] [CrossRef]
  7. World Health Organization. Blindness and Vision Impairment. Available online: https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment (accessed on 1 April 2023).
  8. World Health Organization. Deafness and Hearing Loss. Available online: https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss (accessed on 1 April 2023).
  9. Soltani, S.; Mahnam, A. A practical efficient human computer interface based on saccadic eye movements for people with disabilities. Comput. Biol. Med. 2016, 70, 163–173. [Google Scholar] [CrossRef] [PubMed]
  10. Alonso, R.; Concas, E.; Reforgiato Recupero, D. An Abstraction Layer Exploiting Voice Assistant Technologies for Effective Human—Robot Interaction. Appl. Sci. 2021, 11, 9165. [Google Scholar] [CrossRef]
  11. Wu, C.-M.; Chen, Y.-J.; Chen, S.-C.; Yeng, C.-H. Wireless Home Assistive System for Severely Disabled People. Appl. Sci. 2020, 10, 5226. [Google Scholar] [CrossRef]
  12. Kekade, S.; Hseieh, C.-H.; Islam, M.M.; Atique, S.; Mohammed Khalfan, A.; Li, Y.-C.; Abdul, S.S. The usefulness and actual use of wearable devices among the elderly population. Comput. Methods Programs Biomed. 2018, 153, 137–159. [Google Scholar] [CrossRef] [PubMed]
  13. United Nations. Inclusive Development for Persons with Disabilities: Report of the Secretary-General; United Nations: Midtown Manhattan, NY, USA, 2018. [Google Scholar]
  14. United Nations. Transforming our World: The 2030 Agenda for Sustainable Development; United Nations: Midtown Manhattan, NY, USA, 2015. [Google Scholar]
  15. Duplaga, M. Digital divide among people with disabilities: Analysis of data from a nationwide study for determinants of Internet use and activities performed online. PLoS ONE 2017, 12, e0179825. [Google Scholar] [CrossRef]
  16. Marco, L.; Alonso, Á.; Quemada, J. An Identity Model for Providing Inclusive Services and Applications. Appl. Sci. 2019, 9, 3813. [Google Scholar] [CrossRef]
  17. Braham, A.; Khemaja, M.; Buendía, F.; Gargouri, F. A Hybrid Recommender System for HCI Design Pattern Recommendations. Appl. Sci. 2021, 11, 10776. [Google Scholar] [CrossRef]
  18. Bastien, J.M.C. Usability testing: A review of some methodological and technical aspects of the method. Int. J. Med. Inform. 2010, 79, e18–e23. [Google Scholar] [CrossRef]
  19. Arrue, M.; Valencia, X.; Pérez, J.E.; Moreno, L.; Abascal, J. Inclusive Web Empirical Studies in Remote and In-Situ Settings: A User Evaluation of the RemoTest Platform. Int. J. Human Comput. Interact. 2019, 35, 568–583. [Google Scholar] [CrossRef]
  20. ISO 9241-110:2020; Ergonomics of Human-System Interaction—Part 110: Interaction Principles. International Organization for Standardization: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/75258.html (accessed on 1 April 2023).
  21. ISO/IEC 25066:2016; Systems and Software Engineering—Systems and Software Quality Requirements and Evaluation (SQuaRE)—Common Industry Format (CIF) for Usability—Evaluation Report. International Organization for Standardization: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/63831.html (accessed on 1 April 2023).
  22. ISO 9241-210:2019; Ergonomics of Human-System Interaction—Part 210: Human-Centered Design for Interactive Systems. International Organization for Standardization: Geneva, Switzerland, 2023. Available online: https://www.iso.org/standard/77520.html (accessed on 1 April 2023).
  23. Nakić, J.; Kosović, I.N.; Franić, A. User-Centered Design as a Method for Engaging Users in the Development of Geovisualization: A Use Case of Temperature Visualization. Appl. Sci. 2022, 12, 8754. [Google Scholar] [CrossRef]
  24. Zickler, C.; Halder, S.; Kleih, S.C.; Herbert, C.; Kübler, A. Brain Painting: Usability testing according to the user-centered design in end users with severe motor paralysis. Artif. Intell. Med. 2013, 59, 99–110. [Google Scholar] [CrossRef]
  25. Cayola, L.; Macías, J.A. Systematic guidance on usability methods in user-centered software development. Inf. Softw. Technol. 2018, 97, 163–175. [Google Scholar] [CrossRef]
  26. Nair, V.; Olmschenk, G.; Seiple, W.H.; Zhu, Z. ASSIST: Evaluating the usability and performance of an indoor navigation assistant for blind and visually impaired people. Assist. Technol. 2022, 34, 289–299. [Google Scholar] [CrossRef] [PubMed]
  27. ISO 9241-11; Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts. International Organization for Standardization: Geneva, Switzerland, 2023. Available online: https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en (accessed on 1 April 2023).
  28. Andreoni, G. Investigating and Measuring Usability in Wearable Systems: A Structured Methodology and Related Protocol. Appl. Sci. 2023, 13, 3595. [Google Scholar] [CrossRef]
  29. Jaspers, M.W.M. A comparison of usability methods for testing interactive health technologies: Methodological aspects and empirical evidence. Int. J. Med. Inform. 2009, 78, 340–353. [Google Scholar] [CrossRef]
  30. Gonçalves, R.; Rocha, T.; Martins, J.; Branco, F.; Au-Yong-Oliveira, M. Evaluation of e-commerce websites accessibility and usability: An e-commerce platform analysis with the inclusion of blind users. Univers. Access Inf. Soc. 2018, 17, 567–583. [Google Scholar] [CrossRef]
  31. ISO 9999:2022; Assistive Products—Classification and Terminology. International Organization for Standardization: Geneva, Switzerland, 2023. Available online: https://www.iso.org/obp/ui/#iso:std:iso:9999:ed-7:v1:en (accessed on 1 April 2023).
  32. Giudice, N.A.; Guenther, B.A.; Kaplan, T.M.; Anderson, S.M.; Knuesel, R.J.; Cioffi, J.F. Use of an Indoor Navigation System by Sighted and Blind Travelers. ACM Trans. Access. Comput. 2020, 13, 1–27. [Google Scholar] [CrossRef]
  33. Šumak, B.; Špindler, M.; Debeljak, M.; Heričko, M.; Pušnik, M. An empirical evaluation of a hands-free computer interaction for users with motor disabilities. J. Biomed. Inform. 2019, 96, 103249. [Google Scholar] [CrossRef]
  34. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; University of Durham: Durham, UK, 2007. [Google Scholar]
  35. Simple Complex Parsifal. Available online: https://parsif.al/ (accessed on 21 April 2023).
  36. Torrado, J.C.; Jaccheri, L.; Pelagatti, S.; Wold, I. HikePal: A mobile exergame to motivate people with intellectual disabilities to do outdoor physical activities. Entertain. Comput. 2022, 42, 100477. [Google Scholar] [CrossRef]
  37. Brizee, A.; Sousa, M.; Driscoll, D.L. Writing Centers and Students with Disabilities: The User-centered Approach, Participatory Design, and Empirical Research as Collaborative Methodologies. Comput. Compos. 2012, 29, 341–366. [Google Scholar] [CrossRef]
  38. Wesselman, L.M.P.; Schild, A.K.; Hooghiemstra, A.M.; Meiberth, D.; Drijver, A.J.; Leeuwenstijn-Koopman, M.V.; Prins, N.D.; Brennan, S.; Scheltens, P.; Jessen, F.; et al. Targeting Lifestyle Behavior to Improve Brain Health: User-Experiences of an Online Program for Individuals with Subjective Cognitive Decline. J. Prev. Alzheimer’s Dis. 2020, 7, 184–194. [Google Scholar] [CrossRef]
  39. Käthner, I.; Halder, S.; Hintermüller, C.; Espinosa, A.; Guger, C.; Miralles, F.; Vargiu, E.; Dauwalder, S.; Rafael-Palou, X.; Solà, M.; et al. A Multifunctional Brain-Computer Interface Intended for Home Use: An Evaluation with Healthy Participants and Potential End Users with Dry and Gel-Based Electrodes. Front. Neurosci. 2017, 11, 286. [Google Scholar] [CrossRef]
  40. Keskinen, T.; Heimonen, T.; Turunen, M.; Rajaniemi, J.-P.; Kauppinen, S. SymbolChat: A flexible picture-based communication platform for users with intellectual disabilities. Interact. Comput. 2012, 24, 374–386. [Google Scholar] [CrossRef]
  41. Thorsen, R.; Dalla Costa, D.; Beghi, E.; Ferrarin, M. Myoelectrically Controlled FES to Enhance Tenodesis Grip in People With Cervical Spinal Cord Lesion: A Usability Study. Front. Neurosci. 2020, 14, 412. [Google Scholar] [CrossRef] [PubMed]
  42. Doush, I.A.; Al-Jarrah, A.; Alajarmeh, N.; Alnfiai, M. Learning features and accessibility limitations of video conferencing applications: Are people with visual impairment left behind. Univers. Access Inf. Soc. 2022, 1–16. [Google Scholar] [CrossRef]
  43. Weir, K.; Loizides, F.; Nahar, V.; Aggoun, A.; Pollard, A. I see therefore I read: Improving the reading capabilities of individuals with visual disabilities through immersive virtual reality. Univers. Access Inf. Soc. 2021, 1–27. [Google Scholar] [CrossRef]
  44. Chu, C.H.; Biss, R.K.; Cooper, L.; Quan, A.M.L.; Matulis, H. Exergaming Platform for Older Adults Residing in Long-Term Care Homes: User-Centered Design, Development, and Usability Study. JMIR Serious Games 2021, 9, e22370. [Google Scholar] [CrossRef]
  45. Agulló, B.; Matamala, A.; Orero, P. From Disabilities to Capabilities: Testing subtitles in immersive environments with end users. Hikma 2018, 17, 195–220. [Google Scholar] [CrossRef]
  46. Fogli, D.; Arenghi, A.; Gentilin, F. A universal design approach to wayfinding and navigation. Multimed. Tools Appl. 2020, 79, 33577–33601. [Google Scholar] [CrossRef]
  47. Carvalho, M.C.N.; Dias, F.S.; Reis, A.G.S.; Freire, A.P. Accessibility and usability problems encountered on websites and applications in mobile devices by blind and normal-vision users. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing, Pau, France, 9–13 April 2018; ACM: New York, NY, USA, 2018; pp. 2022–2029. [Google Scholar]
  48. Lee, K.; Hong, J.; Jarjue, E.; Mensah, E.E.; Kacorri, H. From the lab to people’s home: Lessons from accessing blind participants’ interactions via smart glasses in remote studies. In Proceedings of the 19th International Web for All Conference, Lyon, France, 25–26 April 2022; ACM: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  49. Pascual, A.; Ribera, M.; Granollers, T.; Coiduras, J.L. Impact of Accessibility Barriers on the Mood of Blind, Low-vision and Sighted Users. Procedia Comput. Sci. 2014, 27, 431–440. [Google Scholar] [CrossRef]
  50. Guasch, D.; Martín-Escalona, I.; Macías, J.A.; Francisco, V.; Hervás, R.; Moreno, L.; Bautista, S. Design and evaluation of ECO: An augmentative and alternative communication tool. Univers. Access Inf. Soc. 2022, 21, 827–849. [Google Scholar] [CrossRef]
  51. Silva, J.S.R.; Freire, A.P.; Cardoso, P.C.F. When headers are not there: Design and user evaluation of an automatic topicalisation and labelling tool to aid the exploration of web documents by blind users. In Proceedings of the 19th International Web for All Conference, Lyon, France, 25–26 April 2022; ACM: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  52. Goncu, C.; Finnegan, D.J. ‘Did You See That!?’ Enhancing the Experience of Sports Media Broadcast for Blind People; Springer: Cham, Switzerland, 2021; pp. 396–417. [Google Scholar]
  53. Miralles, F.; Vargiu, E.; Rafael-Palou, X.; Solà, M.; Dauwalder, S.; Guger, C.; Hintermüller, C.; Espinosa, A.; Lowish, H.; Martin, S.; et al. Brain–Computer Interfaces on Track to Home: Results of the Evaluation at Disabled End-Users’ Homes and Lessons Learnt. Front. ICT 2015, 2, 25. [Google Scholar] [CrossRef]
  54. Mattie, J.; Wong, A.; Leland, D.; Borisoff, J. End user evaluation of a Kneeling Wheelchair with “on the fly” adjustable seating functions. Disabil. Rehabil. Assist. Technol. 2019, 14, 543–554. [Google Scholar] [CrossRef]
  55. Lee, S.M.; Hong, S.-G.; An, D.-H.; Lee, H.-M. Disability users’ evaluation of the web accessibility of SNS. Serv. Bus. 2014, 8, 517–540. [Google Scholar] [CrossRef]
  56. Smaradottir, B.F.; Håland, J.A.; Martinez, S.G. User Evaluation of the Smartphone Screen Reader VoiceOver with Visually Disabled Participants. Mob. Inf. Syst. 2018, 2018, 6941631. [Google Scholar] [CrossRef]
  57. Alonso-Virgós, L.; Rodríguez Baena, L.; Pascual Espada, J.; González Crespo, R. Web Page Design Recommendations for People with Down Syndrome Based on Users’ Experiences. Sensors 2018, 18, 4047. [Google Scholar] [CrossRef]
  58. Lopes, P.; Pino, M.; Carletti, G.; Hamidi, S.; Legué, S.; Kerhervé, H.; Benveniste, S.; Andéol, G.; Bonsom, P.; Reingewirtz, S.; et al. Co-Conception Process of an Innovative Assistive Device to Track and Find Misplaced Everyday Objects for Older Adults with Cognitive Impairment: The TROUVE Project. IRBM 2016, 37, 52–57. [Google Scholar] [CrossRef]
  59. Creed, C.; Frutos-Pascual, M.; Williams, I. Multimodal Gaze Interaction for Creative Design. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; ACM: New York, NY, USA, 2020; pp. 1–13. [Google Scholar]
  60. Aizpurua, A.; Arrue, M.; Vigo, M. Prejudices, memories, expectations and confidence influence experienced accessibility on the Web. Comput. Human Behav. 2015, 51, 152–160. [Google Scholar] [CrossRef]
  61. Kulich, H.R.; Bass, S.R.; Koontz, A.M. Rehabilitation professional and user evaluation of an integrated push-pull lever drive system for wheelchair mobility. Assist. Technol. 2020, 1–9. [Google Scholar] [CrossRef]
  62. Sato, D.; Oh, U.; Naito, K.; Takagi, H.; Kitani, K.; Asakawa, C. NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 29 October–1 November 2017; ACM: New York, NY, USA, 2017; pp. 270–279. [Google Scholar]
  63. Sato, D.; Oh, U.; Guerreiro, J.; Ahmetovic, D.; Naito, K.; Takagi, H.; Kitani, K.M.; Asakawa, C. NavCog3 in the Wild: Large-scale Blind Indoor Navigation Assistant with Semantic Features. ACM Trans. Access. Comput. 2019, 12, 1–30. [Google Scholar] [CrossRef]
  64. Filippi, S.; Barattin, D. Involving Autism Spectrum Disorder (ASD) Affected People in Design; Springer: Cham, Switzerland, 2017; pp. 373–383. [Google Scholar]
  65. Schroeter, C.; Mueller, S.; Volkhardt, M.; Einhorn, E.; Huijnen, C.; van den Heuvel, H.; van Berlo, A.; Bley, A.; Gross, H.-M. Realization and user evaluation of a companion robot for people with mild cognitive impairments. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 1153–1159. [Google Scholar]
  66. Deems-Dluhy, S.L.; Jayaraman, C.; Green, S.; Albert, M.V.; Jayaraman, A. Evaluating the Functionality and Usability of Two Novel Wheelchair Anti-Rollback Devices for Ramp Ascent in Manual Wheelchair Users With Spinal Cord Injury. PM&R 2017, 9, 483–493. [Google Scholar]
  67. Day, P.; Jokisuu, E.; Smith, A.W.D. Accessible Touch: Evaluating Touchscreen PIN Entry Concepts with Visually Impaired People Using Tactile or Haptic Cues. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2018; pp. 327–334. [Google Scholar]
  68. Darin, T.; Andrade, R.; Sánchez, J. Usability evaluation of multimodal interactive virtual environments for learners who are blind: An empirical investigation. Int. J. Hum. Comput. Stud. 2022, 158, 102732. [Google Scholar] [CrossRef]
  69. Pérez, J.; Díaz, J.; Garcia-Martin, J.; Tabuenca, B. Systematic literature reviews in software engineering—Enhancement of the study selection process using Cohen’s Kappa statistic. J. Syst. Softw. 2020, 168, 110657. [Google Scholar] [CrossRef]
  70. Vincent, C.; Girard, R.; Dumont, F.; Archambault, P.; Routhier, F.; Mostafavi, M.A. Evaluation of satisfaction with geospatial assistive technology (ESGAT): A methodological and usability study. Disabil. Rehabil. Assist. Technol. 2022, 17, 134–151. [Google Scholar] [CrossRef]
  71. Alajarmeh, N. The extent of mobile accessibility coverage in WCAG 2.1: Sufficiency of success criteria and appropriateness of relevant conformance levels pertaining to accessibility problems encountered by users who are visually impaired. Univers. Access Inf. Soc. 2022, 21, 507–532. [Google Scholar] [CrossRef]
  72. Królak, A.; Zając, P. Analysis of the accessibility of selected massive open online courses (MOOCs) for users with disabilities. Univers. Access Inf. Soc. 2022. [CrossRef]
  73. Fox, S.; Brown, L.J.E.; Antrobus, S.; Brough, D.; Drake, R.J.; Jury, F.; Leroi, I.; Parry-Jones, A.R.; Machin, M. Co-design of a Smartphone App for People Living with Dementia by Applying Agile, Iterative Co-design Principles: Development and Usability Study. JMIR Mhealth Uhealth 2022, 10, e24483. [Google Scholar] [CrossRef]
  74. Jain, D.; Huynh Anh Nguyen, K.; Goodman, S.M.; Grossman-Kahn, R.; Ngo, H.; Kusupati, A.; Du, R.; Olwal, A.; Findlater, L.; Froehlich, J.E. ProtoSound: A Personalized and Scalable Sound Recognition System for Deaf and Hard-of-Hearing Users. In Proceedings of the CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 30 April–5 May 2022; ACM: New York, NY, USA, 2022; pp. 1–16. [Google Scholar]
  75. Ahmetovic, D.; Bernareggi, C.; Leporini, B.; Mascetti, S. WordMelodies: Supporting the Acquisition of Literacy Skills by Children with Visual Impairment through a Mobile App. ACM Trans. Access. Comput. 2022, 16, 1. [Google Scholar] [CrossRef]
  76. Barbosa, N.M.; Hayes, J.; Kaushik, S.; Wang, Y. “Every Website Is a Puzzle!”: Facilitating Access to Common Website Features for People with Visual Impairments. ACM Trans. Access. Comput. 2022, 15, 1–35. [Google Scholar] [CrossRef]
  77. Ito, K.; Uehara, S.; Yuasa, A.; Kim, C.M.; Kitamura, S.; Ushizawa, K.; Tanabe, S.; Otaka, Y. Electromyography-controlled gamified exercise system for the distal upper extremity: A usability assessment in subacute post-stroke patients. Disabil. Rehabil. Assist. Technol. 2021. [CrossRef]
  78. Lebrasseur, A.; Lettre, J.; Routhier, F.; Bouffard, J.; Archambault, P.S.; Campeau-Lecours, A. Evaluation of the usability of an actively actuated arm support. Assist. Technol. 2021, 33, 271–277. [Google Scholar] [CrossRef]
  79. Apu, F.S.; Joyti, F.I.; Anik, M.A.U.; Zobayer, M.W.U.; Dey, A.K.; Sakhawat, S. Text and Voice to Braille Translator for Blind People. In Proceedings of the 2021 International Conference on Automation, Control and Mechatronics for Industry 4.0 (ACMI), Rajshahi, Bangladesh, 8–9 July 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–6. [Google Scholar]
  80. Cimmino, L.; Pero, C.; Ricciardi, S.; Wan, S. A method for user-customized compensation of metamorphopsia through video see-through enabled head mounted display. Pattern Recognit. Lett. 2021, 151, 252–258. [Google Scholar] [CrossRef]
  81. Yeong, J.L.; Thomas, P.; Buller, J.; Moosajee, M. A Newly Developed Web-Based Resource on Genetic Eye Disorders for Users With Visual Impairment (Gene. Vision): Usability Study. J. Med. Internet Res. 2021, 23, e19151. [Google Scholar]
  82. Zhang, X.; de Greef, L.; Swearngin, A.; White, S.; Murray, K.; Yu, L.; Shan, Q.; Nichols, J.; Wu, J.; Fleizach, C.; et al. Screen Recognition: Creating Accessibility Metadata for Mobile Applications from Pixels. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; ACM: New York, NY, USA, 2021; pp. 1–15. [Google Scholar]
  83. Leporini, B.; Rossetti, V.; Furfari, F.; Pelagatti, S.; Quarta, A. Design Guidelines for an Interactive 3D Model as a Supporting Tool for Exploring a Cultural Site by Visually Impaired and Sighted People. ACM Trans. Access. Comput. 2020, 13, 1–39. [Google Scholar] [CrossRef]
  84. Yeni, S.; Cagiltay, K.; Karasu, N. Usability investigation of an educational mobile application for individuals with intellectual disabilities. Univers. Access Inf. Soc. 2020, 19, 619–632. [Google Scholar] [CrossRef]
  85. Yi, Y.J. Web accessibility of healthcare Web sites of Korean government and public agencies: A user test for persons with visual impairment. Univers. Access Inf. Soc. 2020, 19, 41–56. [Google Scholar] [CrossRef]
  86. Alonso-Virgos, L.; Baena, L.R.; Crespo, R.G. Web accessibility and usability evaluation methodology for people with Down syndrome. In Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), Sevilla, Spain, 24–27 June 2020; pp. 1–7. [Google Scholar]
  87. Husin, M.H.; Lim, Y.K. InWalker: Smart white cane for the blind. Disabil. Rehabil. Assist. Technol. 2020, 15, 701–707. [Google Scholar] [CrossRef]
  88. Summa, S.; Schirinzi, T.; Bernava, G.M.; Romano, A.; Favetta, M.; Valente, E.M.; Bertini, E.; Castelli, E.; Petrarca, M.; Pioggia, G.; et al. Development of SaraHome: A novel, well-accepted, technology-based assessment tool for patients with ataxia. Comput. Methods Programs Biomed. 2020, 188, 105257. [Google Scholar] [CrossRef] [PubMed]
  89. Rocha, T.; Paredes, H.; Martins, P.; Barroso, J. Tech-Inclusion Research: An Iconographic Browser Extension Solution. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2020; pp. 333–344. [Google Scholar]
  90. Rocha, T.; Gonçalves, C.; Fernandes, H.; Reis, A.; Barroso, J. The AppVox mobile application, a tool for speech and language training sessions. Expert Syst. 2019, 36, e12373. [Google Scholar] [CrossRef]
  91. Efthimiou, E.; Fotinea, S.-E.; Goulas, T.; Vacalopoulou, A.; Vasilaki, K.; Dimou, A.-L. Sign Language Technologies and the Critical Role of SL Resources in View of Future Internet Accessibility Services. Technologies 2019, 7, 18. [Google Scholar] [CrossRef]
  92. Guo, A.; Kong, J.; Rivera, M.; Xu, F.F.; Bigham, J.P. StateLens: A Reverse Engineering Solution for Making Existing Dynamic Touchscreens Accessible. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, New Orleans, LA, USA, 20–23 October 2019; ACM: New York, NY, USA, 2019; pp. 371–385. [Google Scholar]
  93. Wittich, W.; Jarry, J.; Morrice, E.; Johnson, A. Effectiveness of the Apple iPad as a Spot-reading Magnifier. Optom. Vis. Sci. 2018, 95, 704–710. [Google Scholar] [CrossRef]
  94. Rossetti, V.; Furfari, F.; Leporini, B.; Pelagatti, S.; Quarta, A. Enabling Access to Cultural Heritage for the visually impaired: An Interactive 3D model of a Cultural Site. Procedia Comput. Sci. 2018, 130, 383–391. [Google Scholar] [CrossRef]
  95. Reichinger, A.; Carrizosa, H.G.; Wood, J.; Schröder, S.; Löw, C.; Luidolt, L.R.; Schimkowitsch, M.; Fuhrmann, A.; Maierhofer, S.; Purgathofer, W. Pictures in Your Mind: Using Interactive Gesture-Controlled Reliefs to Explore Art. ACM Trans. Access. Comput. 2018, 11, 1–39. [Google Scholar] [CrossRef]
  96. Kozlowski, A.J.; Fabian, M.; Lad, D.; Delgado, A.D. Feasibility and Safety of a Powered Exoskeleton for Assisted Walking for Persons With Multiple Sclerosis: A Single-Group Preliminary Study. Arch. Phys. Med. Rehabil. 2017, 98, 1300–1307. [Google Scholar] [CrossRef]
  97. Senan, N.; Wan Ab Aziz, W.A.; Othman, M.F.; Suparjoh, S. Embedding Repetition (Takrir) Technique in Developing Al-Quran Memorizing Mobile Application for Autism Children. Proc. MATEC Web Conf. 2017, 135, 00076. [Google Scholar] [CrossRef]
  98. Zhang, D.; Zhou, L.; Uchidiuno, J.O.; Kilic, I.Y. Personalized Assistive Web for Improving Mobile Web Browsing and Accessibility for Visually Impaired Users. ACM Trans. Access. Comput. 2017, 10, 1–22. [Google Scholar] [CrossRef]
  99. Pereira, L.S.; Archambault, D. Understanding How People with Cerebral Palsy Interact with the Web 2.0. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; pp. 239–242. [Google Scholar]
  100. Rocha, T.; Paredes, H.; Barroso, J.; Bessa, M. SAMi: An Accessible Web Application Solution for Video Search for People with Intellectual Disabilities. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; pp. 310–316. [Google Scholar]
  101. Rocha, R.; Reis, L.P.; Rego, P.A.; Moreira, P.M.; Faria, B.M. New forms of interaction in serious games for cognitive rehabilitation: Implementation and usability study. In Proceedings of the 2016 11th Iberian Conference on Information Systems and Technologies (CISTI), Gran Canaria, Spain, 15–18 June 2016; pp. 1–6. [Google Scholar]
  102. Paulino, D.; Amaral, D.; Amaral, M.; Reis, A.; Barroso, J.; Rocha, T. Professor Piano: A music application for people with intellectual disabilities. In Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion, Online, 1–3 December 2016; ACM: New York, NY, USA, 2016; pp. 269–274. [Google Scholar]
  103. Mirri, S.; Prandi, C.; Salomoni, P. Personalizing Pedestrian Accessible way-finding with mPASS. In Proceedings of the 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 9–12 January 2016; pp. 1119–1124. [Google Scholar]
  104. Morales-Villaverde, L.M.; Caro, K.; Gotfrid, T.; Kurniawan, S. Online Learning System to Help People with Developmental Disabilities Reinforce Basic Skills. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility, Reno, NV, USA, 23–26 October 2016; ACM: New York, NY, USA, 2016; pp. 43–51. [Google Scholar]
  105. Gamecho, B.; Minon, R.; Aizpurua, A.; Cearreta, I.; Arrue, M.; Garay-Vitoria, N.; Abascal, J. Automatic Generation of Tailored Accessible User Interfaces for Ubiquitous Services. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 612–623. [Google Scholar] [CrossRef]
  106. Godinho, R.; Condado, P.A.; Zacarias, M.; Lobo, F.G. Improving accessibility of mobile devices with EasyWrite. Behav. Inf. Technol. 2015, 34, 135–150. [Google Scholar] [CrossRef]
  107. Savva, A.; Petrie, H.; Power, C. Comparing Concurrent and Retrospective Verbal Protocols for Blind and Sighted Users. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Abascal, J., Barbosa, S., Fetter, M., Gross, T., Palanque, P., Winckler, M., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9296, pp. 55–71. ISBN 978-3-319-22700-9. [Google Scholar]
  108. Navarrete, R.; Lujan-Mora, S. OER-based learning and people with disabilities. In Proceedings of the 2015 International Conference on Interactive Collaborative and Blended Learning (ICBL), Mexico City, Mexico, 9–11 December 2015; pp. 25–34. [Google Scholar]
  109. Ivanchev, M.; Zinke, F.; Lucke, U. Pre-journey Visualization of Travel Routes for the Blind on Refreshable Interactive Tactile Displays. In Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2014; pp. 81–88. [Google Scholar]
  110. Rodriguez-Sanchez, M.C.; Moreno-Alvarez, M.A.; Martin, E.; Borromeo, S.; Hernandez-Tamames, J.A. Accessible smartphones for blind users: A case study for a wayfinding system. Expert Syst. Appl. 2014, 41, 7210–7222. [Google Scholar] [CrossRef]
  111. McDaniel, T.; Viswanathan, L.N.; Panchanathan, S. An evaluation of haptic descriptions for audio described films for individuals who are blind. In Proceedings of the 2013 IEEE International Conference on Multimedia and Expo (ICME), San Jose, CA, USA, 15–19 July 2013; pp. 1–6. [Google Scholar]
  112. Roentgen, U.R.; Gelderblom, G.J.; de Witte, L.P. User Evaluation of Two Electronic Mobility Aids for Persons Who Are Visually Impaired: A Quasi-Experimental Study Using a Standardized Mobility Course. Assist. Technol. 2012, 24, 110–120. [Google Scholar] [CrossRef] [PubMed]
  113. Fuglerud, K.S.; Røssvoll, T.H. An evaluation of web-based voting usability and accessibility. Univers. Access Inf. Soc. 2012, 11, 359–373. [Google Scholar] [CrossRef]
  114. Hassell, J.; James, A.; Wright, M.; Litterick, I. Signing recognition and Cloud bring advances for inclusion. J. Assist. Technol. 2012, 6, 152–157. [Google Scholar] [CrossRef]
Figure 1. Systematic literature review process adapted from [34].
Figure 2. Research process map.
Figure 3. Flow diagram of the database searches and article screening process.
Figure 4. Number of studies published from 2012 to 2022.
Figure 5. Percentage of papers based on publication type (All = 84).
Figure 6. Number of articles with descriptions of individual testing stages.
Figure 7. Number of users involved in existing user-based research.
Table 1. Research questions.
ID | Research Question
RQ1 | What have been the trends and demographics of the literature within the field of user testing with users with disabilities?
RQ2 | What barriers do researchers encounter in user testing where users with disabilities are included?
RQ3 | Based on existing research and experience, what good practices should be followed for the successful inclusion of users with disabilities in different phases of user-based testing?
Table 2. Articles retrieved from the selected digital libraries using the specified search string.
Database | URL | Nr. of Articles
ACM Digital Library | https://dl.acm.org/ | 562
IEEE Xplore | https://ieeexplore.ieee.org/ | 152
Scopus | https://www.scopus.com/ | 255
Web of Science | https://www.webofscience.com/ | 116
Total | | 1085
Table 3. Inclusion criteria.
ID | Criteria | Description
I1 | Field | Include articles that provide information about user-based testing.
I2 | Language | Include articles that are written in English.
I3 | Availability | Include articles that are accessible electronically.
I4 | Literature type | Include articles published in peer-reviewed journals, conference proceedings, or a book (e.g., lecture notes).
Table 4. Exclusion criteria.
ID | Criteria | Description
E1 | Year | Exclude literature published before the year 2012.
E2 | Duplicates | Exclude any duplicated studies found in multiple databases.
E3 | Research area | Exclude non-computer science or non-HCI literature.
E4 | Methodology type | Exclude non-original research or articles that report the results of a systematic literature review or systematic mapping study.
E5 | Abstract only | Exclude extended abstracts or posters.
Table 5. Quality criteria.
ID | Criteria | Description | Answer [Weight]
Q1 | Writing quality | The article contains well-written essential elements such as an abstract, introduction, materials and methods, discussion, and conclusions, from which the data of interest to our research can be understood effectively. | High [10]; Medium [5]; Low [0]
Q2 | Venue quality | The article is published in a journal or in conference proceedings. | High [10]; Medium [5]; Low [0]
Q3 | User testing description | The description of the testing process contains detailed and comprehensively described procedures for the individual testing phases, such as preparation of tasks, recruitment, execution of the test, etc. | High [10]; Medium [5]; Low [0]
Q4 | Users with disabilities are included | Persons with disabilities are included in the user testing. | Yes [10]; No [0]
Table 6. Steps in screening and selection of the relevant literature.
Step | Activity | Nr. of Articles
I | Automatic search in digital libraries | 1085
II | Applying E1 | 852
III | Applying E2 (removing the duplicates) | 818
IV | Screening by title and reading abstract and conclusion (applying I1–I4, E3–E5) | 134
V | Quality assessment | 84
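As a supplement to Tables 3–6, the following minimal Python sketch illustrates how the inclusion/exclusion criteria and the weighted quality criteria (Q1–Q4) could be applied programmatically during screening. The record fields, helper names, and the acceptance threshold are assumptions made for illustration only and are not part of the review protocol reported here.

```python
# Illustrative sketch of the screening and quality-assessment steps (Tables 3-6).
# Field names, helper functions, and the acceptance threshold are assumptions
# made for illustration; they are not taken from the original review protocol.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    year: int
    language: str
    peer_reviewed: bool          # I4: journal, proceedings, or book chapter
    accessible_online: bool      # I3
    reports_user_testing: bool   # I1
    is_secondary_study: bool     # E4: SLR or systematic mapping study
    is_abstract_or_poster: bool  # E5
    q_scores: dict               # e.g., {"Q1": 10, "Q2": 5, "Q3": 10, "Q4": 10}

def passes_screening(c: Candidate) -> bool:
    """Apply inclusion criteria I1-I4 and exclusion criteria E1, E4, E5.
    E2 (duplicates) and E3 (research area) would be handled separately,
    e.g., by DOI/title matching and manual screening."""
    included = (c.reports_user_testing and c.language == "English"
                and c.accessible_online and c.peer_reviewed)
    excluded = c.year < 2012 or c.is_secondary_study or c.is_abstract_or_poster
    return included and not excluded

def quality_score(c: Candidate) -> int:
    """Sum the weighted answers for Q1-Q4 (High = 10, Medium = 5, Low/No = 0)."""
    return sum(c.q_scores.get(q, 0) for q in ("Q1", "Q2", "Q3", "Q4"))

# Hypothetical usage: keep screened candidates above an assumed threshold.
THRESHOLD = 20  # assumed cut-off, for illustration only
candidates: list[Candidate] = []  # would be populated from the database exports
selected = [c for c in candidates
            if passes_screening(c) and quality_score(c) >= THRESHOLD]
```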
Table 7. Data collection form.
ID | Description | Type | Value
Section A
EA1 | Data Extractor | String field | n/a
EA2 | Study Identifier | Integer field | S [1–n]
EA3 | Name of database | String field | n/a
EA4 | Title of primary research | String field | n/a
EA5 | Author | String field | n/a
EA6 | Year | Integer field | [2012–2022]
EA7 | Publication type | String field | [book chapter/journal paper/conference proceeding]
EA8 | Journal | String field | n/a
EA9 | Pages | Integer field | n/a
EA10 | Volume | Integer field | n/a
EA11 | doi | Integer field | n/a
Section B
EB1 | Type of disability | Multiple choice | 1. Not specified; 2. Blind; 3. Low vision; 4. Deaf; 5. Hard of hearing; 6. Low dexterity; 7. Wheelchair user; 8. Hemiplegia; 9. Paraplegia; 10. Quadriplegia; 11. Intellectual disability; 12. Dyslexia; 13. ASD (autism); 14. Down's syndrome; 15. Other
EB2 | Stage(s) of usability testing described: 1. Preparation of tasks; 2. Prototype preparation; 3. Ethical issues; 4. Recruitment; 5. Welcome; 6. Pre-test questionnaires; 7. Test execution; 8. Post-test questionnaires; 9. Post-evaluation feedback | String field (one field per stage) | n/a
EB3 | Number of users involved | Integer field | [0–n]
EB4 | Tools/instruments used | String field | n/a
EB5 | Best practices identified | String field | n/a
EB6 | Challenges identified | String field | n/a
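To clarify how the data collection form in Table 7 maps onto a concrete data structure, the sketch below models one extraction record per primary study. The class layout, field types, and enumerated stage names mirror Table 7, but the Python representation itself is an illustrative assumption rather than the format actually used by the authors.

```python
# Minimal sketch of one data-extraction record mirroring Table 7.
# The class layout and type choices are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

TESTING_STAGES = [
    "Preparation of tasks", "Prototype preparation", "Ethical issues",
    "Recruitment", "Welcome", "Pre-test questionnaires",
    "Test execution", "Post-test questionnaires", "Post-evaluation feedback",
]

@dataclass
class ExtractionRecord:
    # Section A: bibliographic data (EA1-EA11)
    data_extractor: str
    study_identifier: int            # e.g., 36 for study S36
    database: str
    title: str
    authors: str
    year: int                        # 2012-2022
    publication_type: str            # book chapter / journal paper / conference proceeding
    venue: str
    pages: Optional[str] = None
    volume: Optional[str] = None
    doi: Optional[str] = None
    # Section B: extracted study content (EB1-EB6)
    disability_types: List[str] = field(default_factory=list)  # EB1, multiple choice
    stages_described: List[str] = field(default_factory=list)  # EB2, subset of TESTING_STAGES
    users_involved: int = 0                                     # EB3
    tools_instruments: str = ""                                 # EB4
    best_practices: str = ""                                    # EB5
    challenges: str = ""                                        # EB6
```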
Table 8. Top journals and conferences regarding the number of published articles.
Journal | Nr. of Articles
Universal Access in the Information Society | 8
ACM Transactions on Accessible Computing | 7
Assistive Technology | 4
Disability and Rehabilitation: Assistive Technology | 4
Frontiers in Neuroscience | 2
Procedia Computer Science | 2
Book chapters and conference proceedings | Nr. of Articles
Lecture Notes in Computer Science | 8
International Web for All Conference | 2
Iberian Conference on Information Systems and Technologies (CISTI) | 2
International ACM SIGACCESS Conference on Computers and Accessibility | 2
CHI Conference on Human Factors in Computing Systems | 2
Annual ACM Symposium on User Interface Software and Technology | 1
Annual ACM Symposium on Applied Computing | 1
International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion | 1
Table 9. Involvement of users in existing user-based testing research according to the type of disability (in descending order).
Disability Type | N of Occurrences
Blind | 40
Low vision | 33
Other | 30
Intellectual disability | 12
Deaf | 7
Wheelchair user | 7
Hard of hearing | 5
Low dexterity | 5
Quadriplegia | 5
Hemiplegia | 4
Paraplegia | 3
ASD (autism) | 3
Dyslexia | 1
Down's syndrome | 1
Not specified | 0
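Because EB1 in the data collection form is a multiple-choice field, a single study can contribute to several rows of Table 9, which is why the occurrence counts sum to more than the 84 selected studies. The short sketch below shows how such occurrence counts can be aggregated from the extracted EB1 values; the example records are invented placeholders.

```python
# Sketch of how the occurrence counts in Table 9 can be derived from the
# extracted EB1 values. The example records below are invented placeholders.
from collections import Counter

# Each entry stands for the EB1 field of one primary study (multiple choice).
eb1_per_study = [
    ["Blind", "Low vision"],
    ["Intellectual disability"],
    ["Blind"],
]

occurrences = Counter(d for types in eb1_per_study for d in types)
for disability, count in occurrences.most_common():
    print(f"{disability}: {count}")
```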
Table 10. Challenges of including users with disabilities in user-based testing.
ID | Challenge | Short Explanation
C1. Challenges before user testing
C1.1 | Selecting participants without bias | A common type of bias in participant selection is sampling bias. Bias in participant selection can lead to testing results that do not accurately represent the needs and preferences of the targeted user group. Selecting participants with prior interest in or knowledge of the tested solution can also skew user testing results [36].
C1.2 | Understanding participants' characteristics, needs, and cultural and personal differences | Planning user-based testing, especially with users with disabilities, can be hampered by a poor understanding of cultural and personal differences. Researchers may not understand the specific needs and mindsets of users, both with and without disabilities [37]. Users with disabilities may communicate in different ways or require assistive products. Consequently, the user-based testing process for users with disabilities is not properly designed and executed. For example, when international participants are included, disregarding cultural differences is problematic [38].
C1.3 | Developing protocols for enabling inclusive user-based testing | Designing an inclusive test protocol requires understanding how protocols differ for users without disabilities and users with disabilities. Users with disabilities may have different capabilities and limitations, which in turn can affect their ability to participate in tasks that demand physical activity or cognitive load. Due to individual differences in capabilities between users, results cannot always be compared directly [39].
C1.4 | Preparing tasks suited to the participants' abilities | In user-based testing, where users must complete different tasks, it is essential to prepare tasks that suit the users' abilities. For example, participants with intellectual disabilities have difficulties completing complex tasks [40], so preparing tasks aligned with their cognitive abilities is critical.
C1.5 | Setting up the testing environment according to specific user needs | The successful implementation of user testing also depends on a properly prepared test environment, which, especially in the case of users with disabilities, must be adapted to the abilities, needs, and other characteristics of the users. Difficulties can occur when setting up the testing environment for each participant with disabilities [41], because different disabilities require different accommodations.
C2. Challenges during user testing
C2.1 | Getting used to and learning how to use the tested product | Users need help getting accustomed to tools or UIs they use for the first time during the testing process [42]. For example, older adults and users with disabilities struggle with unfamiliar tools [43], which can add time and effort to the testing process. This challenge stems from a variety of factors, including the complexity of the UI, a lack of familiarity with it, a lack of clear instructions or guidance on how to use it effectively, the user's confidence in using it, and time pressure or the need to complete tasks quickly, which can further increase the difficulty of learning and adapting to new tools.
C2.2 | Presenting the instructions and tasks to the participants in a clear and accessible way | During testing, the researcher has to provide instructions and task presentations tailored to the specific abilities and limitations of the participants to ensure that the tasks are understandable and accessible to all of them. Participants find it difficult to visualize problems when they lack real experience (they cannot judge a game only by watching it; they must experience playing it) [44]. Unclear and confusing instructions [45] can lead to confusion and inaccurate feedback, impacting the validity of the testing results.
C2.3 | Preventing disruption of the testing process due to malfunctions of the tested product or the tools used | While performing user testing, there is a risk that the tested environment malfunctions (freezing, failing to update, etc.) [26]. Any disturbance can affect the length of the testing process and, consequently, the well-being and experience of the participant. Poor accessibility of the tools and instruments used for measurement or data acquisition can limit the ability to fully capture the feedback of participants with disabilities (e.g., a visual disability [46]), resulting in potentially incorrect conclusions. Therefore, the measurement tools used during the testing procedures must also be tested for accessibility, particularly if participants use them directly.
C2.4 | Including accessible-only products | Tested products that are not accessible and can be used only by users without disabilities (e.g., without a visual disability [47]) lead to incomplete and inaccurate results. The test results may therefore not accurately reflect how the product or service will be used and accepted by different end users.
C2.5 | Using assistive technology in a productive manner | The testing environment, including all tools used for testing purposes, must be accessible to participants with different disabilities, and assistive technology must be integrated seamlessly into the testing process so that it does not create additional barriers for participants with disabilities. For example, a lack of screen reader support on some devices (e.g., smart glasses), a rapidly draining device battery, or dependency on an Internet connection can reduce the success of the testing process [48].
C2.6 | Acquiring unbiased feedback from participants | If others are present in the testing room during the testing procedures (e.g., the participant's caregiver or the researcher), participants often feel obliged to provide only positive feedback. Such an influence can make participants' opinions favorable toward their caregivers or the researcher [36], leading to biased or incomplete feedback, as participants may be hesitant to provide negative feedback or may not express their true opinions or preferences. In a testing setting with observers present, users tend to increase their emotional control in disadvantageous conditions due to social desirability [49], conforming to the expectations of others rather than reporting their actual opinions or experiences. This can lead to erroneous conclusions based on incomplete and dishonest feedback.
C2.7 | Preventing the influence of the accompanying person on the course of testing | The behavior or feedback of children and of users with disabilities who need caregivers (e.g., young adults with special communication needs [50] who cannot participate in testing alone) might be influenced by the presence of an accompanying person (e.g., parents, caregivers, and others). Sometimes the caregiver gives input instead of the user with a disability, leading to testing results that do not reflect the participant's true experiences or feelings.
C2.8 | Ensuring efficient online or remote user-based testing for all users | User-based testing in an online environment, necessitated by the pandemic or other reasons, often results in unreliable findings [51]. Conducting remote user-based testing can bring new challenges, especially when performing research with participants with disabilities who receive too little support [52]. Participants' technical barriers, such as a poor internet connection, device limitations, software compatibility issues, and other problems, can make testing at home infeasible in some cases [53]. In addition to technical issues, there is the issue of environmental control (background noise or poor lighting affecting focus), which can lead to less engaged participants and biased feedback.
C2.9 | Organizing on-site testing for participants who have difficulties coming to the laboratory | Some user-based tests require strict laboratory conditions and rigorous protocols that cannot be performed remotely [33]. On-site user-based testing involves costs, logistics, scheduling, organizing space and testing equipment, time constraints, recruitment, and other travel-related inconveniences, which can discourage participation. It is therefore necessary to further encourage and enable on-site user testing, especially for users with disabilities whose circumstances otherwise prevent them from coming to the laboratory setting or make it harder to do so.
C2.10 | Minimizing the influence of persuasive technologies | Integrating persuasive technologies into the testing environment influences end users' behavior and attitudes [38], potentially affecting participants' decisions and the user testing results, so that they no longer accurately reflect the true user experience. Researchers need to understand the impact of these technologies on user behavior and attitudes and implement measures to reduce their effects.
C2.11 | Providing enough time for completing the tasks | Complete novelty, limited experience, or unfamiliarity may impact the results [54], as they affect the time needed to complete the tasks, especially when testing objectives are poorly prioritized or time constraints are unclear. When users with disabilities are included in the testing process, special attention must be paid to task completion time limits. When the testing process is too complex, with too many steps or variables to consider, participants can have difficulties performing the tasks and require more time than expected [55].
C2.12 | Providing controlled-environment testing conditions comparable to those in the real environment | When conducting user-based testing in a controlled lab setting, the conditions are carefully controlled to eliminate as many external factors as possible that could influence the test results, in order to ensure reliable results. However, such a controlled lab setting may not reflect the real-world conditions in which users will use the product, and testing can produce results that differ significantly from those obtained in real-world settings where users perform their regular daily activities [54].
C2.13 | Ensuring an accessible environment setting during the testing | If the testing environment is not accessible, participants with disabilities can experience difficulties, which will affect the testing results. Problems with the availability and reliability of the technical infrastructure can create an unsuitable testing environment for an evaluation with users with different disabilities (e.g., a visual disability [56]). For example, sitting straight without being able to move the head is potentially challenging for some participants [57]. In addition, users with physical disabilities may experience fatigue or discomfort during extended testing sessions, impacting their ability to provide accurate feedback.
C2.14 | Efficient management of technical problems during the testing | Solving unexpected technical problems creates time-consuming and challenging activities not directly connected to user testing [58]. Efficiently managing technical difficulties during testing can be complex because of time constraints, complexity issues, and unforeseen problems.
C3. Challenges after user testing
C3.1 | Providing adequate compensation for participants | Not compensating users who participate in testing creates a bad user experience [59]. However, providing adequate compensation for participants in user-based testing involving users with disabilities can be challenging for several reasons. First, there is always a potential risk of a conflict of interest or of legal and ethical issues that affect user testing; compensating participants with disabilities can raise ethical concerns related to exploitation, coercion, and undue influence. Second, the compensation offered may need to vary depending on the type and severity of the disability and the accommodations required to ensure the participant can participate effectively in testing.
C3.2 | Meeting the expectations of participants | Meeting participants' expectations in user-based testing involving users with disabilities can be challenging for several reasons. Participants with disabilities may have diverse needs and expectations regarding testing, making it difficult to anticipate and meet all of them. They may also experience communication barriers that make it difficult to express their expectations or provide feedback during testing. Unmet expectations can be perceived as deception and result in frustration or a poor user experience [60].
Table 11. Best practices for inclusion of users with disabilities in user-based testing identified in the literature review.
Table 11. Best practices for inclusion of users with disabilities in user-based testing identified in the literature review.
IDBest PracticeShort Explanation
BP1. Best practices before user testing
BP1.1Explaining user testing goals to participantsProviding information in an accessible format about user testing activities before starting the test will motivate participants to increase their assurance, self-confidence, and self-efficacy [52,56]. Failing to do so, participants will not fully understand what is expected of them or what they can expect from the testing process, endangering the user experience of participants as well as the final results.
BP1.2Collecting consent from participantsProviding the user’s ethical approval and informed consent enables fair, transparent, and accurate research, minimizing harm [59,61]. Participants should also be allowed to withdraw consent during testing, ensuring their privacy will be respected and no adverse consequences can come from testing activities.
BP1.3Training for participantsProviding the opportunity for training the participants before they start with user testing increases self-confidence, self-efficiency, and motivation, and also reduces stress. [61,62,63]. This activity ensures participants fully understand how to use the product so they can provide more effective feedback. In some cases, a demonstration of how to use the product or service gives participants a better understanding of how it works and what they need to do during the testing process.
BP1.4Preparing clear instructions for participantsProviding clear and concise instructions increases the success rate of testing [19]. It ensures participants understand what they need to do during the testing process by using simple language, breaking tasks down into manageable steps, clarifying expectations, and providing examples so that participants can give effective feedback that will improve the tested product.
BP1.5Using accessibility standardsEnsuring all documents are accessible and standards-compliant to ensure compatibility with assistive products increases testing effectiveness and accessibility for all participants [19]. By following established accessibility guidelines, the involvement of individuals with disabilities is more feasible, ensuring that a wider audience can use the product or service.
BP1.6Design led by people with disabilitiesMoving from design for disabled people to design led by disabled people involves a fundamental shift in the approach to inclusive design and brings people with disabilities into the design process as active and equal partners [64]. Including people with disabilities in user testing ensures that products are accessible and meet the needs of people with disabilities and can lead to more innovative and creative designs that benefit all users, not just those with disabilities.
BP2. Best practices during user testing
BP2.1Enabling user testing from homeAllowing participants to perform user testing activities in their homes reduces stress, increases their well-being and relaxation, and avoids costs [33,53]. Remote testing is, however, possible only if instructions are provided clearly, technical support is established, and a variety of communication methods are set to communicate efficiently with the participants during the testing process. A more significant level of flexibility is also required.
BP2.2Enabling accompanying persons during the testingAllowing participants to be accompanied by their caregivers, friends, or family members increases their safety, comfort, self-confidence, and self-efficiency [36,40,62,65]. Informing participants of this option, providing them with instructions, and organizing space according to the needs of one additional person increases the chance participants with impairments will participate in testing activities.
BP2.3Enabling participants to repeat the tasksThe possibility to repeat each task for participants during the user testing activities increases positive user experience and motivation and reduces stress [61]. Connected to providing clear instructions, a more accurate and comprehensive understanding of the user experience can be gained, and new issues can be identified that may have been missed during the initial attempt. Sometimes, using a different approach or method while repeating the tasks (different devices or software) will help identify various issues.
BP2.4Providing enough time to complete the tasksProviding enough time for performing user testing activities allows participants to obtain a good feeling with minimal stress, motivating them and giving them enough time for preparation without rushing to perform the activities [41,66]. Correct estimates of times needed to perform the activities, allowing breaks and flexible schedules ensure that participants can effectively use the product or service being tested and provide comprehensive feedback.
BP2.5Organizing user testing activities with sufficient breaksProviding breaks between user testing activities allows a longer concentration of participants [19], ensuring that participants remain engaged and focused throughout the testing session. Schedules should be organized strategically, although flexible according to the user testers’ needs, accompanied by refreshments, reducing fatigue, and increasing the effectiveness of the testing session.
BP2.6 Ensuring supervision by professionals in the field of user-based testing with users with disabilities: Including expert supervision in user testing activities with users with disabilities increases the likelihood that requirements and good practices are followed during testing [66,67]. The presence of a professional in this field who monitors the testing sessions, develops a testing plan, and provides feedback and guidance ensures that the testing is conducted in a structured and effective manner and that the results are accurate and reliable.
BP2.7 Involving several evaluators in the user testing process: One of the main precautions to take when conducting a usability test with learners who are blind is to always include at least two evaluators: one mediator and one observer. If the evaluator who gives instructions and mediates also tries to make detailed field notes, this person is more likely to overlook some usability issues [68]. Involving several evaluators and assigning them clearly defined roles in the user testing process helps ensure that the testing results are accurate and reliable, especially if the results are regularly reviewed, discussed, and compared and the evaluators collaborate (a sketch of one way to quantify evaluator agreement follows this group of practices).
BP2.8 Creating comfortable surroundings: Providing comfortable surroundings (such as a quiet, comfortable location free from distractions, with refreshments available) and accessible infrastructure improves concentration and test performance [53]. Participants who feel at ease are more likely to provide honest and detailed feedback.
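One simple way to operationalize the comparison of evaluators' results mentioned in BP2.7 is to quantify how consistently two evaluators rate the same observed issues, for example with Cohen's kappa. The sketch below is only an illustration under that assumption: the severity scale, the sample ratings, and the function names are invented for the example and are not taken from the reviewed studies.

```typescript
// Illustrative agreement check between two evaluators' severity ratings (Cohen's kappa).
type Rating = 'minor' | 'moderate' | 'severe';

function cohensKappa(ratingsA: Rating[], ratingsB: Rating[]): number {
  if (ratingsA.length !== ratingsB.length || ratingsA.length === 0) {
    throw new Error('Both evaluators must rate the same set of issues.');
  }
  const n = ratingsA.length;
  const categories: Rating[] = ['minor', 'moderate', 'severe'];

  // Observed agreement: share of issues both evaluators rated identically.
  const observed = ratingsA.filter((r, i) => r === ratingsB[i]).length / n;

  // Expected chance agreement, from each evaluator's marginal rating frequencies.
  let expected = 0;
  for (const c of categories) {
    const pA = ratingsA.filter(r => r === c).length / n;
    const pB = ratingsB.filter(r => r === c).length / n;
    expected += pA * pB;
  }
  return expected === 1 ? 1 : (observed - expected) / (1 - expected);
}

// Invented sample data: eight issues logged independently by the mediator and the observer.
const mediator: Rating[] = ['severe', 'minor', 'moderate', 'minor', 'severe', 'moderate', 'minor', 'minor'];
const observer: Rating[] = ['severe', 'minor', 'minor', 'minor', 'severe', 'moderate', 'moderate', 'minor'];
console.log(`Cohen's kappa: ${cohensKappa(mediator, observer).toFixed(2)}`); // 0.60 for this sample
```

Low agreement after a session signals that issue definitions or severity criteria should be discussed and aligned before the results are aggregated.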
BP3. Best practices after user testing
BP3.1 Compensating participants for their contribution to user testing activities: Compensating participants for attending user testing increases the likelihood of participation and the seriousness of their involvement [62,63]. In addition to appropriate compensation that reflects the time and effort required of participants and the value of their contribution, it is essential to show gratitude and genuine appreciation for their time and input.
Table 12. Mapping of identified good practices to identified challenges in user-based testing with users with disabilities.
[Matrix not reproducible here: rows list the challenges C1.1–C1.5, C2.1–C2.14, and C3.1–C3.2; columns list the best practices BP1.1–BP3.1. A cell with a black background depicts the link between an individual challenge and an individual best practice.]