Article

Human–Object Interaction: Development of a Usability Index for Product Design Using a Hierarchical Fuzzy Axiomatic Design

by Mayra Ivette Peña-Ontiveros 1, Cesar Omar Balderrama-Armendariz 1, Alberto Rossa-Sierra 2,*, Aide Aracely Maldonado-Macias 3, David Cortés Sáenz 1 and Juan Luis Hernández Arellano 1
1 Design Department, Universidad Autónoma de Ciudad Juárez, Ave. del Charro 450 Norte, Ciudad Juárez 32310, Chihuahua, Mexico
2 Facultad de Ingeniería, Universidad Panamericana, Álvaro del Portillo 49, Zapopan 45010, Jalisco, Mexico
3 Department of Industrial and Manufacturing Engineering, Universidad Autónoma de Ciudad Juárez, Ave. del Charro 450 Norte, Ciudad Juárez 32310, Chihuahua, Mexico
* Author to whom correspondence should be addressed.
Computation 2024, 12(6), 130; https://doi.org/10.3390/computation12060130
Submission received: 5 April 2024 / Revised: 16 May 2024 / Accepted: 23 May 2024 / Published: 20 June 2024

Abstract: Consumer product usability has been addressed using tools that evaluate objects to improve user interaction. However, such diversity in approach makes it challenging to select a method for the type of product being assessed. This article compiles the concepts used since the origin of usability in product design and groups them by attributes to formulate a usability index proposal. Due to the nature of the data, fuzzy, hierarchical, and axiomatic tools were applied to a trial group of experts and users. Three questionnaires were designed and administered throughout a five-stage process to collect and select attributes, rank them in importance, assign fuzzy values, obtain their numerical representation of use, and assign a qualitative category. By analyzing a case study, this research demonstrates the value of the index by comparing the use of computer mice. Unlike other approaches to evaluating usability, the proposed index incorporates the hierarchical importance of attributes. It allows participants to express their opinions, transforming subjective responses into linguistic values represented as triangular areas, resulting in a more accurate representation of reality. Additionally, the complexity of the human–object interaction is treated with the information axiom to compute the usability index on a scale from 0 to 1, which reflects the probability of the product meeting the desired usability attributes.

1. Introduction

The term “usability” has been used to measure the relationship between humans and objects to improve their interaction. The ISO 9241:11 standard defines usability as the ease of use of an object, the degree of effectiveness, efficiency, and satisfaction with which users can reach specific objectives and carry out activities with it in a specific context [1]. Also, usability measures the quality of a user’s experience when interacting with products or systems, focusing on ease of use, learnability, efficiency, memorability, and error prevention [2]. Norman has defined usability as a combination of ease of learning, ease of use, and user satisfaction [3]. Krug has defined it as a quality attribute that measures how easy user interfaces are to use [4].
When it comes to building user-friendly products, there are several key considerations to keep in mind. These include identifying and addressing issues early on in the development process, ensuring that users can successfully accomplish their intended tasks with the product, using metrics to measure progress objectively, adapting evaluation processes to suit the needs of various stakeholders, and following a set of guiding principles to identify and improve upon key usability features. By prioritizing these factors, we can create products that are both effective and easy to use [5,6].
The definition of the ISO 9241:11 standard has been the most accepted terminology for enhancing usability processes. However, multiple other concepts have emerged to the point of being classified as an umbrella concept [7]. Therefore, this work aims to explore the concepts outlined in the literature since the inception of the product’s usability. Some common approaches and attributes that must be considered for usability evaluation are quality, observation, and interaction between the user and the product to measure efficiency, effectiveness, satisfaction, and functionality [8]. Whichever usability evaluation method is used, several other factors found in the design principles or guidelines must be considered [9]. Therefore, attributes can be a factor of greater importance when evaluating a usability evaluation proposal.
Tools have been developed to help designers evaluate performance during the design stages [10]. For example, Goo et al. proposed a conceptual design methodology integrating axiomatic design and the hierarchical structure of failure mode, effects, and criticality analysis [11]. Baquero, Rodríguez, and Ciudad used fuzzy logic theory to develop a user experience-based proposal for usability assessment [12]. A meta-standard has also been reported to provide a framework for all experts from different fields of knowledge and to raise communication awareness regarding how usability is characterized, represented, and operationalized in other fields [13].
Despite the existence of techniques and questionnaires addressing usability evaluation, these have been used merely to identify missing attributes or to compare several products to each other. Moreover, most use-related attributes in a product are intangible, which complicates expressing them in quantitative values. As can be understood, the tools to evaluate usability must be straightforward and easy to use, as opposed to some methods that lack relevant information regarding how and where to use them, whose applications are scarce, and which offer incomplete information on what is being assessed [10]. Considering this, several research inquiries emerge: How can we amalgamate all usability principles into a singular evaluation tool? How do we factor in the significance of each attribute based on the product type? How can we mitigate the subjectivity of both experts and users? How can we assess a product’s usability with a quantitative/qualitative assignment?
Method analysis is undoubtedly essential to guiding product designers to obtain more precise, satisfactory, functional, and quality designs [14]. Complexity has been managed by applying multi-attribute methods that help treat subjectivity and uncertainty [15,16,17]. Specifically, fuzzy axiomatic design is a method for decision-making on multiple-criteria problems [18,19,20,21,22]. This paper aims to introduce a usability index that can be used to evaluate the design of products in terms of their usability. To achieve this objective, a five-stage process was proposed, which involved collecting and selecting attributes, assessing their importance, assigning fuzzy values, and ultimately computing the usability index. To facilitate this process, three questionnaires were designed and administered using the principles of axiomatic design, AHP, and fuzzy logic. To put this index to the test, a case study was conducted in which design experts and potential users were consulted to evaluate three different computer mice. To achieve this, Section 2 presents the method, describing the formulas and the theoretical basis of the methodological tools used. Section 3 summarizes the data obtained from the search for information and the results of applying questionnaires and formulas. Section 4 of this document focuses on analyzing the procedure used and the results, comparing them with other research. Finally, the conclusions in Section 5 show the observations obtained from the usability index’s approach, use, and analysis.

2. Materials and Methods

The development of the product usability index proposal involved several stages. Figure 1 shows the development and analysis procedure and the results obtained during the product usability index process. This five-stage procedure will be broken down throughout the description of this research.

2.1. Establishing Attributes

The first stage consisted of reviewing the research on usability in product design, which has taken place over time, to analyze the attributes that must be included in the usability study.
A literature review was conducted to select proposals focused on usability. The words used were chosen from a search through the Science Direct, Springer, and EBSCO databases as well as from related documents shown by the Google Scholar search engine. Figure 2 shows the usability-related keywords, which were used in both English and Spanish.
This stage aimed to find variables intervening in various usability processes to identify relationships between them and develop classifications that might help while considering which attributes to include.

2.2. Attribute Assessment

The second stage consisted of assessing the relevance of the questionnaire items, which was performed through statistical means (Cronbach’s alpha). Three different questionnaires were used to (1) rank attributes by importance, (2) rate what is expected from product design, and (3) evaluate the product. It should be clarified that questionnaires 1 and 2 were to be administered only to a group of experts and were required when a new product category was started. The purpose of the third questionnaire was to help determine the value that users assign to the product. It would be used to evaluate and classify the usability of the object being studied without the need to administer additional questionnaires. The number of questionnaire items would be reduced at this stage if the statistical analysis revealed the need to do so.

2.3. Establishing Weight

The purpose of the third stage was to establish the weight for each attribute using the first questionnaire. It was only administered to experts (professionals in product design) as it aimed to rank attributes by importance, using pairwise comparisons through the AHP (analytical hierarchy process). AHP is a support technique for multi-criteria decision-making based on ranking, paired comparison, and importance weights. It is widely used in the literature as one of the best techniques. The method was proposed by Thomas Saaty in 1980 and consists of converting subjective evaluations of relative importance into a set of total weights to be used later to select the best alternative [23]. The technique allows for efficient and graphic information organization using matrix algebra. In other words, as Saaty (2008) explains, “It is about breaking down a problem, situation, or scenario and then bringing together all the solutions to the sub-problems into a conclusion” [24]. In consistency with Maldonado et al. and Awan et al., the following process was followed [25,26].
  • The first step was to establish the alternatives to be compared, which are represented by
$A_i, \quad i = 1, 2, \ldots, n$
  • The second step consisted of establishing the attributes, $B_i$, $i = 1, 2, \ldots, m$, where $m$ is the number of attributes.
  • Next, a group of experts were chosen who drew on their own judgement to rank each attribute in order of importance to obtain a weight, which would result from the AHP’s pairwise comparison and the geometric mean, as shown in Equation (1):
$W_B = \left( W_{B_1} \cdot W_{B_2} \cdots W_{B_j} \right)^{1/j}$
  • Based on Entani et al., the first step is to determine the attributes with m; then, a pair of attributes is compared and generates all possible pairs, thus obtaining the A comparison matrix as shown in Equation (2) [27]:
$A = \left[ a_{ij} \right] = \begin{bmatrix} 1 & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ 1/a_{1m} & \cdots & 1 \end{bmatrix}$
  • where $a_{ij}$ expresses the priority of attribute $i$ relative to attribute $j$, with pairwise comparisons given for all attribute pairs up to attribute $m$.
  • The weights $w_i$ were obtained from the matrix using the eigenvector method and Equation (3):
$A \cdot w = \lambda \cdot w$
where $\lambda$ is the eigenvalue and $w$ is the respective eigenvector. The weight vector obtained was $W = (w_1, \ldots, w_n)^t$, which corresponds to the principal eigenvalue $\lambda_{max}$.
  • The sum of the weights obtained was normalized, and the final decision matrix values were used as final weights for the respective attributes.
  • The alternatives were ranked by W_j in descending order, with the highest value signaling the most preferred alternative.
The weight for each attribute was obtained from the AHP and the geometric mean, taking into account the importance given to them by the experts [25].
The Saaty scale was chosen to define the values that the expert-issued judgments could take in the AHP tool. This scale can represent the value judgments used to compare two alternatives under any given criterion; that is, it can be used to establish the importance or preference of alternatives in a pairwise comparison matrix. The scale granted each of the comparisons homogeneity and a certain degree of certainty, as these pairwise comparisons can readily be based on intuition, data, previous analysis, or experience [24,28].
Finally, the AHP method allows for assessing the congruence between judgments through the inconsistency ratio (IR); before establishing an inconsistency, it was necessary to determine the consistency index (CI) from the number of attributes (n) and λmax, the highest eigenvalue of the matrix. Equation (4) was used to obtain the consistency index [20]:
$CI = \dfrac{\lambda_{max} - n}{n - 1}$
Once the CI was obtained, the consistency ratio (CR) was calculated using Equation (5):
$CR = \dfrac{CI}{IR}$
where CI is the consistency index and IR is the inconsistency ratio.
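As a rough illustration of how Equations (3)–(5) fit together, the sketch below (Python with NumPy) derives the weights of a pairwise comparison matrix via the principal eigenvector and checks its consistency. The 3 × 3 matrix, its Saaty-scale judgments, and the random index value for n = 3 are illustrative assumptions, not the questionnaire data collected in this study.

```python
import numpy as np

def ahp_weights(matrix):
    """Principal-eigenvector weights and consistency index for a pairwise comparison matrix."""
    A = np.asarray(matrix, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max (Equation (3))
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                          # normalized weight vector
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)     # consistency index (Equation (4))
    return w, ci

# Hypothetical comparison of the three level-1 attributes (functionality, quality, aesthetics)
# on the 1-9 Saaty scale: 1 = equally, 3 = moderately, 5 = strongly, 9 = absolutely more important.
A = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w, ci = ahp_weights(A)
cr = ci / 0.58                               # consistency ratio (Equation (5)); 0.58 is the usual random index for n = 3
print(np.round(w, 3), round(cr, 3))          # a CR below roughly 0.10 is normally taken as consistent
```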

2.4. Obtaining Fuzzy Values

Fuzzy logic is a form of logic used for reasoning that deals with approximate rather than fixed and exact values. Unlike traditional binary logic, where variables take true or false values (1 or 0), fuzzy logic variables may have a truth value that ranges between 0 and 1. This approach allows for more flexible and realistic modeling of real-world scenarios where information is often imprecise or uncertain [12].
To grasp the significance of fuzzy values in application, it is essential to comprehend their role in axiomatic design. The information axiom dictates that the probability of success can be computed by defining the design range (DR) for the functional requirements (FR) and evaluating the system range (SR) provided by the proposed design to fulfill the FR [29]. These ranges may intersect, creating a common range (CR), denoting a successful result. Fuzzy values can then be employed to interpret and analyze these ranges and intersection areas to reduce subjectivity in experts’ and users’ opinions.
To carry out this process, the results had to be transformed into fuzzy data; therefore, fuzzy logic was used, since such a reasoning mode applies multiple truth or confidence values to the resolution of a problem and admits different degrees of membership rather than an all-or-nothing classification [12].
The fuzzy information axiom has been proposed by Kulak and Kahraman to solve multi-attribute decision-making problems that involve linguistic information [19]. Fuzzy data can be fuzzy linguistic terms adapted to tangible and intangible attributes. For this, the linguistic terms are systematically converted to their corresponding fuzzy numbers, and each fuzzy number is assigned a classification. The system contains five-level conversion scales, such as “poor–fair–good–very good–excellent” for intangible criteria and “very low–low–medium–high–very high” for tangible criteria [19,30]. The concept of fuzzy logic provides a logical framework to handle approximate reasoning and evaluate options effectively [31].
To develop this analysis, two questionnaires were designed: one to evaluate what is expected from the product and the other to evaluate what the product has. The connection between AHP and axiomatic design lies in using the latter’s functional requirements with a weight determined by AHP.
During stage 4, Questionnaire 2 was administered to 5 experts and used to obtain the design range, whereas Questionnaire 3 was administered to 104 users to obtain the system range and, thus, evaluate a specific product. This was performed using the fuzzy axiomatic design concepts in the information axiom. The result analysis focused on the information axiom because it makes it possible to obtain quantitative data and to determine the system range (SR) of a functional requirement (FR) (which, in this case, corresponds to the attributes) as well as the specified design range (DR). The SR is what the system can deliver (that is, what it can do), while the DR refers to what it should ideally achieve in terms of tolerances or specification limits. The probability of satisfying the FR, that is, the information needed to fully satisfy it, is measured through the intersection of the designer-specified DR and the SR, which is what the implemented solution can reach; such an intersection is called the common range (CR). The design with the highest probability of success is, thus, the best design [30,32].
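A minimal numerical sketch of this intersection idea is given below (Python with NumPy); it is not the authors’ implementation. The design range and system range are represented as triangular membership functions, and the system area (AS) and common area (AC) are obtained by integrating the curves; the triangle parameters are taken from the “shape” attribute of product A reported later in Table 7.

```python
import numpy as np

def tri_membership(x, a, b, c):
    """Triangular membership function for (a, b, c); a right-shoulder number has c == b."""
    x = np.asarray(x, dtype=float)
    up = np.clip((x - a) / (b - a), 0.0, 1.0) if b > a else (x >= a).astype(float)
    down = np.clip((c - x) / (c - b), 0.0, 1.0) if c > b else (x <= c).astype(float)
    return np.minimum(up, down)

def area(mu, x):
    """Trapezoidal integral of a membership curve."""
    return float(np.sum((mu[1:] + mu[:-1]) * np.diff(x) / 2.0))

x = np.linspace(0.0, 1.0, 10001)
dr = tri_membership(x, 0.2, 1.0, 1.0)        # design range for "shape" defined by the experts (Table 7)
sr = tri_membership(x, 0.2, 0.29, 0.5)       # system range for product A rated by the users (Table 7)

system_area = area(sr, x)                    # AS: area under the system-range triangle
common_area = area(np.minimum(dr, sr), x)    # AC: area of the DR-SR intersection (common range)
print(round(system_area, 4), round(common_area, 4))
```

With these inputs, the sketch returns a system area of about 0.15 and a common area of about 0.04, in the vicinity of the 0.1484 and 0.042 reported for product A in Table 7; the small differences come from the exact membership shapes defined in Figure 6.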

2.5. Establishing the Usability Level

During the fifth phase, the usability index was established from the results obtained in the AHP (weight) and the fuzzy axiomatic design (value) to evaluate the product, using the information axiom equations to obtain the usability information content (UIC). Equation (6) was used to calculate the UIC. The UIC was then weighted by multiplying it by the weights obtained for each of the attributes; both the main attributes (level 1) and the secondary ones (level 2) were considered. Equation (6) is based on Nam Suh’s axiomatic design, and Equation (7) was computed for each alternative to determine its level of usability, using the attributes as usability criteria [33,34].
$UIC = \log_2 \left( \dfrac{AS}{AC} \right)$
where AS is the system range area in a fuzzy triangular number, AC is the common range area, and UIC is the usability information content. Nam Suh established this formula by developing the information axiom for axiomatic design.
$UIC_W = W_{AHP} \times UIC$
where $W_{AHP}$ is the weight obtained for each AHP attribute, whether of level 1 or 2, and $UIC_W$ is the usability information content that includes the AHP weight.
Equation (8) was used to obtain the total usability information content; the equation is the sum of all the weighted UIC values.
$TUIC = \sum_{i=1}^{w} UIC_{W_i}$
where TUIC is the total usability information content and $UIC_{W_i}$ is the weighted usability information content of each attribute.
The product featuring the lowest TUIC is considered the best design option as far as product usability is concerned.
The process by which the total information content was obtained allows for the calculation of the highest and lowest values according to experts’ opinions.
$US_{ind} = 1 - \dfrac{TUIC}{TUIC_{max} - TUIC_{min}}$
where $US_{ind}$ is the usability index, calculated on a scale from 0 to 1 (Equation (9)), in which 1 represents the highest usability value.
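The following sketch (Python) chains Equations (6)–(9) for a reduced example; the weights and areas are taken from only two rows of Table 8 in the Results, so the printed TUIC and USind do not correspond to any complete product evaluation.

```python
import math

def uic(system_area, common_area):
    """Usability information content, Equation (6): UIC = log2(AS / AC)."""
    return math.log2(system_area / common_area)

def usability_index(rows, w_primary, tuic_max, tuic_min=0.0):
    """TUIC (Equation (8)) and USind (Equation (9)) from per-attribute data.

    rows: list of (group, w_secondary, AS, AC) tuples.
    w_primary: level-1 weight of each attribute group.
    """
    tuic = 0.0
    for group, w_sec, a_s, a_c in rows:
        uic_w = w_sec * uic(a_s, a_c)              # Equation (7): weight the UIC by the level-2 weight
        tuic += w_primary[group] * uic_w           # apply the level-1 weight and accumulate (Equation (8))
    us_ind = 1.0 - tuic / (tuic_max - tuic_min)    # Equation (9)
    return tuic, us_ind

# Two attributes of product A taken from Table 8 (shape and material); weights from the AHP stage.
rows = [("aesthetics", 0.211, 0.146, 0.094),
        ("quality",    0.126, 0.146, 0.073)]
w_primary = {"aesthetics": 0.09, "quality": 0.29}
print(usability_index(rows, w_primary, tuic_max=4.878))
```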

2.6. Interpreting the Usability Level Results

The proposal to interpret the TUIC results used a 7-level Likert scale. As has been demonstrated, inter-rater reliability is optimized using 7-point scales; the higher the number of levels on the scale, the more accurate the results [35]. Adding further levels does not increase the tool’s reliability [36]. The levels used were the following: excellent, very good, good, fair, poor, very poor, and terrible. This is how the usability level was established according to the TUIC.

3. Results

This section shows the results obtained after following the described process. First, the selection of the factors is explained. Then, a description of the design of the questionnaires and their reliability is given as well as of the application of AHP and axiomatic design. Finally, the index and the level of product usability are obtained.

3.1. Attribute Identification—Literature Review to Obtain Weights and Set Boundaries for the Attributes

Various results on usability-related topics appeared in the search through the different databases, yet only the information related to product design published between 1954 and 2024 was used. As a result, 39 field-related publications (books, chapters of books, articles, and conferences, among others) were chosen, as they proposed tools to evaluate usability. The publications were ranked in importance according to their number of citations. The attributes mentioned in each publication were grouped by common variables and the relationship between them. This produced a classification of three main factors: context, user, and product.
Figure 3 shows a total of 77 attributes found and classified into three main clusters, subdivided into ten categories. Because the usability assessment was intended to evaluate the attributes that were designed into the object, this research took 26 attributes found in the “product” cluster, which is subdivided into quality, functionality, and aesthetics. The attributes contained in the context and the user will be present in those who answer the proposed questionnaires.

3.2. Working with the Attributes

Questionnaires were designed and administered to experts and product users to obtain information regarding the attributes found to evaluate usability. This section presents the design and reliability of the questionnaires using the attributes outlined in the previous section.

3.2.1. Questionnaire Design

The questionnaires were classified as follows:
  • The first questionnaire aimed to rank the usability attributes’ importance by obtaining a weight for them using a hierarchy pairwise comparison (in consistency with AHP). This questionnaire was administered only to experts in the field of evaluation chosen according to the type of product.
  • The second questionnaire was used to measure the design range (axiomatic design), where the product is evaluated according to what the experts believe should be the lowest expected value for each product’s attributes. Questionnaires 1 and 2 should only be administered when the experts’ assessment of a type of product has yet to be given.
  • The third questionnaire was designed to measure the system range (axiomatic design), which evaluates the product according to what it features, based on the attributes that are presented; it was administered only to users.
These questionnaires were developed based on existing product categories so that different types of products could be evaluated. In this research, the questionnaires focused on electronic products with high user interaction, for example, a mouse, a keyboard, a laptop, or speakers. Table 1 shows part of the developed questionnaire.

3.2.2. Administering Questionnaires to Experts

Five experts in this field were chosen on the grounds of J. Nielsen’s research, which poses that a single evaluator does not find as many usability problems as five evaluators do and that the inclusion of 15 evaluators stabilizes usability problem identification. In his analysis, Nielsen creates a cost–benefit ratio between the number of evaluators involved, holding that the number of evaluators needed does not have to be too large. Thus, together with Landauer, he proposes the ideal number of three to five evaluators since this number suffices to find approximately 75% of the errors [37,38].
To assess this tool’s efficacy, electronic goods were chosen as the subject for questionnaires 2 and 3, which evaluated the design range and system range, respectively. The assessment centered on three wireless mouse designs, which are commonly used by university students and feature a range of unique shapes, textures, and functionalities. Table 2 outlines the general specifications of the mice under consideration. Before conducting an object interaction evaluation, a method for navigating and exploring the on-screen functions of each mouse was devised and put into practice.

3.2.3. Cronbach’s Alpha Results: Tool Reliability

Table 3 shows the Cronbach’s alpha results obtained using the SPSS statistics program. All the data obtained from each questionnaire were entered, and as can be observed, the minimum Cronbach’s alpha was 0.914 for questionnaire 3—product C’s system range—and the highest result was 0.955 for questionnaire 2—design range. As mentioned above, the commonly recommended threshold is 0.7, above which reliability and consistency are considered good [39]. Thus, the questionnaires’ structure and design were deemed suitable for the conduction of this research.
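For reference, Cronbach’s alpha can be computed directly from a questionnaire response matrix as in the short sketch below (Python with NumPy); the four-respondent, three-item matrix is a made-up example, not the study data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]                               # number of items
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)   # variance of each respondent's total score
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Hypothetical Likert-type responses (4 respondents x 3 items):
print(round(cronbach_alpha([[5, 4, 5],
                            [3, 3, 4],
                            [4, 4, 4],
                            [2, 3, 3]]), 3))     # -> 0.9
```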

3.3. Obtaining a Weight for Each Attribute Using the AHP Tool (Questionnaire 1)

This section presents the analysis of the first questionnaire, administered to experts only, which used pairwise comparison among attributes to determine their degree of importance and the AHP tool to assign weight to each of them. It should be noted that the information presented in this section has been summarized to provide a better understanding of the process (Figure 4).
The results obtained by the experts are compiled, and each result is replaced with its respective value in the AHP scale to obtain the weights shown in Figure 4. Table 4 shows the results obtained from each of the consistency clusters obtained through Equations (4) and (5).
A circular graph was drawn, which showed the values together for better visualization of each of the attributes’ importance and weight during product usability evaluation. The percentages show the final weight of each attribute and sub-attribute (See Figure 5).

3.4. Obtaining Fuzzy Values—Fuzzy Axiomatic Design

This section describes the information axiom analysis of axiomatic design, carried out using fuzzy logic. Two questionnaires based on the attributes found were applied: one to evaluate what was expected of the product (DR) and another to evaluate what users thought the product delivered (SR).

3.4.1. Design Range Questionnaire Results

The first step in obtaining the design range (DR) was to substitute the answers with the respective linguistic term value in Figure 6, which is expected to be the lowest desirable for the product. These attributes can be tangible or intangible according to their objective or subjective interpretation. Several triangular figures were obtained from the answers and substitutions of these values and were used to obtain the common area between the design and system ranges. The attributes’ lowest values in the experts’ answers were used due to the expectation of a product’s minimum features in terms of usability.
Table 5 shows the aesthetics attributes alone as a sample of the terms used.

3.4.2. System Range Questionnaire Results and Experts’ Common Area

To calculate the system range (SR), it was first necessary to identify the tangible and intangible attributes since, as mentioned above, they have different fuzzy values. This research work considered most attributes intangible, leaving only efficiency, effectiveness, mental workload, and physical workload tangible since it was possible to assess their degree through a numerical measurement scale. After identifying the fuzzy values for each response, each attribute’s arithmetic mean was obtained to determine the system’s area according to the evaluated product. Table 6 shows the results in terms of aesthetics.
Table 7 shows an example of the fuzzy values for the SR and the DR (questionnaires 2 and 3). The common area (CA) was obtained geometrically, and it encompasses the SR–DR intersection.

3.5. Establishing the Usability Level

The results obtained from the weight (AHP), the SA, and the CA (fuzzy information axiom) were used to establish the usability index through the calculation of the usability information content (UIC) using Equation (6). Then, the weighted UIC was obtained with Equation (7), applying the AHP weights of the level 1 and level 2 attributes, to obtain, in turn, the total usability information content (TUIC) for each alternative using Equation (8).

Usability Information Content

Table 8 shows the UIC and TUIC results obtained from a random sample of 102 computer mouse users. The table features the comparison between the chosen A, B, and C products. As can be observed, Product B holds the lowest TUIC value with 0.548; this makes it the better proposal, leaving product A in second place with 0.834. Product C ranks third with the highest TUIC of 0.884. This means that Product B’s usability is the highest, and that of Product C is the lowest.

3.6. Interpreting the Usability Level for Product Design

This section shows the process used to establish the usability level. The TUIC results obtained by the experts were taken as a basis to find an index based on the limits of each value. Therefore, it was important to establish the minimum and maximum values that can be obtained through this procedure and to use them as a basis to obtain the total area; a proportion of that total area, according to the TUIC obtained in each case, then represents the usability index.

3.6.1. Minimum and Maximum TUIC Values

Because the purpose of this research was to obtain a value in the form of an index, the minimum and maximum fuzzy values were determined to define a maximum area against which individual assessment results can be compared. Following the same reasoning Nam Suh mentions [40], the lower the information content, the better the design.
In this case, the minimum expected was 0. If the users give an excellent rating to the product in the system range (SR) with a fuzzy value, which in this case was considered intangible (0.8, 1, 1), and if the minimum result of the design range (DR) (0.2, 1, 1) is taken, the system area (SA) and common area (CA) will be identical; this means that the value given to the product (SR) was entirely within the minimum range expected for the product, as defined by the experts (DR). Thus, the product met all the attributes excellently.
Considering that the TUIC minimum value was zero, the maximum value was calculated similarly yet inversely. Instead of taking the highest linguistic term, the lowest was taken for the SR: poor or lower, with a fuzzy value of (0, 0, 0.3); for the DR, the same range used to determine the minimum was kept, since it had been established by the experts from the beginning. This way, the minimum intersection between the DR and SR was obtained; the common area therefore became as small as possible and the information content as large as possible, which means that the product met only a minimum part of what was expected and, therefore, featured a poor design. Table 9 shows an example of the results for the “shape” attribute and the maximum TUIC of 4.878, which results from the procedure shown in Table 8, only changing the already mentioned low values.
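For example, substituting the worst-case values of Table 9 for the “shape” attribute into Equations (6) and (7) gives

$$UIC = \log_2\!\left(\frac{0.15}{0.0045}\right) \approx 5.06, \qquad UIC_W = 5.06 \times 0.211 \approx 1.07, \qquad 1.07 \times 0.085 \approx 0.091,$$

which is the primary-attribute contribution of “shape” to the maximum TUIC of 4.878.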

3.6.2. Usability Level Scale

With a minimum TUIC of 0 and a maximum of 4.878, a seven-level, Likert-based scale is proposed to classify the results, as it has been shown that the more levels there are in a scale, the more precise the results will be; for example, five levels offer less variation than seven or eight [35], and using seven or eight levels does not affect the tool’s reliability [38]. Thus, to have a more sensitive classification that does not harm the reliability of this tool, the seven-level scale was chosen. The maximum TUIC (4.878) was divided by 7, as is shown in Table 10, to determine the level according to the TUIC.
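A minimal sketch of the resulting classification rule, assuming the seven equal-width TUIC bins of Table 10 (width 4.878/7 ≈ 0.697):

```python
# Map a TUIC value to the seven usability levels of Table 10 (equal-width bins of TUIC_max / 7).
LEVELS = ["Excellent", "Very good", "Good", "Regular", "Poor", "Very poor", "Appalling"]
TUIC_MAX = 4.878

def usability_level(tuic):
    width = TUIC_MAX / len(LEVELS)
    index = min(int(tuic // width), len(LEVELS) - 1)   # clamp the upper boundary into the last bin
    return LEVELS[index]

print(usability_level(0.548))   # product B -> "Excellent" (cf. Table 11)
```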

3.6.3. TUIC, USind, and Products A, B, and C Usability Level

The usability level of each evaluated product was identified based on the TUIC result. The usability index (USind) is calculated using Equation (9), which considers the lowest and highest possible values. As shown in Table 11, the results show that product B has an excellent level of usability, featuring a USind of 0.887. At the same time, A and C are considered very good with a USind of 0.829 and 0.819, respectively. Because the results are similar, the products are considered to have an acceptable level of usability.
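These index values follow directly from Equation (9); for product B, with $TUIC_{min} = 0$ and $TUIC_{max} = 4.878$,

$$US_{ind} = 1 - \frac{0.548}{4.878 - 0} \approx 0.89,$$

which matches the 0.887 reported in Table 11 up to rounding of the intermediate TUIC.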

4. Discussion

The suggested linear structure in the present study is of utmost importance as it offers a structured approach to monitor the various phases of investigation, including the methods implemented, analyses performed, and outcomes achieved. It is crucial to prioritize the design elements based on their impact on the product’s usability. Furthermore, this structure could identify potential areas of improvement for existing design ideas and guide the development of new ones by assessing their compliance with the necessary attributes required for the product design.
Compared with other studies, the value of the present proposal lies in its quantitative score (0 to 1.0), which allows the analyzed objects to be classified between a maximum and a minimum qualitative description (excellent to appalling), so that products can be cataloged and compared at a general level according to the type of product. The process used to carry out the proposal was supported by statistical and methodological tools recognized for multi-attribute decision-making: Cronbach’s alpha to evaluate the items in the questionnaires, AHP to determine the weights of the attributes, fuzzy logic for a better approximation of the subjective values, and axiomatic design to determine the probability of meeting the design requirements.
It is important to note that the literature review led to the detection of a universe of attributes that different authors have classified as relevant to usability. The current project’s goal was to carefully consider all potential attributes to ensure that no characteristic related to usability was overlooked. It was decided to rely on experts and users to evaluate the attributes based on the specific product type. To minimize deviation in expert opinions, involving three to five experts was recommended, following Nielsen and Landauer [38]. Furthermore, paired comparisons in AHP have proven to be an effective method for assigning hierarchical weights across multiple criteria. Performing these comparisons in a fuzzy environment helps reduce subjective opinions and supports experts in making informed decisions. Afterward, it was found very useful to classify the attributes into three approaches directed to the product, the context, and the users; Calvo, Ortega, and Saez mention that, to generate and discover a better design, it is necessary to know how users work, how they manipulate the product, and how they perform the tasks for which the product was designed. Therefore, there is no usability without the user, nor without the context, and much less without the product itself [41]. Of course, we cannot change the characteristics of people nor those of the context, so the present proposal focused on working with a grouping of 26 product characteristics sub-classified into aesthetic, quality, and functionality attributes. It should be noted that nine of these attributes were closely related (Figure 3) through an intersection with the context and people, which, according to Nielsen, are the relevant points to develop a good design [36].
In order to verify the compliance of the 26 product characteristics in a product design, three types of questionnaires were used, and Cronbach’s alpha verified the pertinence of the attributes according to each group’s objectives. The first questionnaire was developed to determine the weight of each attribute with the AHP tool; in this way, the importance of each one is determined by experts. In the second and third questionnaires, the fuzzy axiomatic design tool was implemented: the second questionnaire determines what is expected from the product, which is also defined by the experts (the design range, DR), and this is compared against the results obtained from the third questionnaire, answered by 104 users, which captures what the product has (the system range, SR). The common and system areas (AC and AS) are then calculated to determine, together with the weights, the total usability information content (TUIC), the usability index, and the usability level.
To interpret the proposal, a case study of computer mice was used to exemplify how to obtain the usability index. The product type for the case study was selected based on the objects that university design students would use. For this, electronic products were selected, and experts and users compared commercial mice labeled A, B, and C. The results showed that the product with the highest level of usability was product B, with a usability index of 0.887 (excellent), while the usability index of product A was 0.829 (very good) and that of product C was 0.819 (very good). According to the proposed scale, from 0 to 1, the highest number corresponds to the product with the highest probability of satisfying the usability attributes. If another electronic product is evaluated, the data from questionnaires 1 and 2 are reused, and only questionnaire 3 must be applied to perform comparisons. Changing the type of product will, however, change the results.
We have to consider that the usability index, in addition to evaluating the product, also compares different types of designs and determines which product has a better level of usability. A database will help store the design range (DR) and weight of the different types of products to expand and improve the usability index.
It should be noted that this study is based on the application of AHP, fuzzy logic, and axiomatic design, which are methods widely used in multiple disciplines to reduce the complexity of decision-making. The combination of these has resulted in many reports in the bibliography regarding hierarchical fuzzy axiomatic design, for example, in the selection of prototypes [42], in the selection of hydrogen storage [22], to resolve problems in the intermodal transportation networks [43], in the evaluation of the blockchain deployment [44], in the alternative selection of product remanufacture [45], or to determine the optimal onshore wind farm site [46].
Despite various research on product usability evaluation, just a few projects have focused on proposing an index related to usability. Utamura et al. suggested an index scale to evaluate user experience by applying the magnitude estimation method of psychophysics in a broad sense that includes surprise, fun, and ease of use [47]. Kim and Han developed a methodology to obtain a usability index of consumer electronics products by classifying dimensions, developing measures, and building usability index models [48]. The Brandy et al. project was the only usability index found for product design, applying 14 items based on the SUS questionnaire and a formula considering the weighted mean of a Likert scale across questions [49]. Compared to existing projects, the present study utilizes a broad range of attributes and applies different methodological tools to make the proposal more systematic and trustworthy.

5. Conclusions

The present work proposes obtaining a usability index based on the collection of the most important attributes found in the bibliography over the years. Attributes were evaluated by expert opinion using paired comparison matrices, and then fuzzy areas were determined to provide numerical data based on expert opinion. Finally, the usability index was determined by axiomatic design through quantitative evaluation and importance ranking.
Three questionnaires were designed and applied: two for the evaluation of attributes (only applied once to experts) and the third questionnaire applied to users each time a product is to be evaluated. The process proposed for using the index presented a systematization of the activities. At the same time, the case study results were decisive in selecting the best product according to its usability characteristics. The work presented is considered a reliable alternative for evaluating the interaction between the user and the products.
Developing and providing feedback on the concept of usability is crucial, as it is often overlooked in product evaluation. This research provides a comprehensive review of the usability context that designers and practitioners must consider along with an alternative approach to evaluating a product’s usability. By harmonizing usability attributes in a single tool for calculating a usability index, we aim to provide a comprehensive evaluation of all aspects of product design with a specific focus on improving usability. The contribution of this work involves three main actions: thoroughly considering all usability characteristics in the bibliography for product design, translating intangible data to quantify imprecise human judgment, and determining an absolute value based on the probability of meeting the required attributes.
This proposal’s challenges involve using questionnaires, which may require a significant time commitment to identify and select a group of experts and a sample size of users that accurately represents the population. To streamline the implementation of the index, efforts should focus on developing an application that can efficiently organize the distribution of the questionnaires, calculate the weights and areas of the design range, and store the data for future purposes. In the future, further implementation of the index on different types of products is required. Although the result of the calculations is based on an axiom (a proposition that does not require demonstration), a sensitivity analysis should be performed to quantify the impact of the variables on the index, providing more detail on the proposal, which is based on proven methodological tools.

Author Contributions

Conceptualization, M.I.P.-O. and C.O.B.-A.; methodology, C.O.B.-A. and A.A.M.-M.; software, M.I.P.-O.; validation, D.C.S. and J.L.H.A.; formal analysis, C.O.B.-A. and A.A.M.-M.; investigation, M.I.P.-O.; resources, A.R.-S.; data curation, M.I.P.-O.; writing—original draft preparation, M.I.P.-O. and C.O.B.-A.; writing—review and editing, A.R.-S.; visualization, A.A.M.-M.; supervision, C.O.B.-A.; project administration, C.O.B.-A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are unavailable due to privacy restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. ISO 9241-11; Ergonomics of Human-System Interaction—Part 11: Usability: Definitions and Concepts. ISO: Geneva, Switzerland, 2018; p. 36. Available online: https://www.iso.org/standard/63500.html (accessed on 1 April 2024).
  2. Shneiderman, B.; Plaisant, C. Designing the User Interface: Strategies for Effective Human-Computer Interaction; Pearson Addison-Wesley: San Francisco, CA, USA, 2009; pp. 707–708. [Google Scholar]
  3. Norman, D. The Design of Everyday Things; Basic Books: New York, NY, USA, 2002; p. 78. [Google Scholar]
  4. Krug, S. Don’t Make Me Think: A Common Sense Approach to Web Usability; Pearson Education India: Chennai, India, 2000; p. 9. [Google Scholar]
  5. Esquivel, G. Ventajas y Desventajas Que Hay Detrás de la Experiencia de Usuario. 2018. Available online: http://cio.com.mx/ventajas-y-desventajas-que-hay-detras-de-la-experiencia-de-usuario/ (accessed on 5 May 2020).
  6. Hernández, E. Métodos y Técnicas de Evaluación de la Usabilidad sin Personas Usuarias. 2019. Available online: https://medium.com/@eliseohdez/métodos-y-técnicas-de-evaluación-de-la-usabilidad-sin-personas-usuarias-e8f7b03c8654 (accessed on 9 November 2019).
  7. Tractinsky, N. The Usability Construct: A Dead End? Hum.-Comput. Interact. 2018, 33, 131–177. [Google Scholar] [CrossRef]
  8. Tan, J.; Gencel, C.; Rönkkö, K. A Framework for Software Usability & User Experience in Mobile Industry. In Proceedings of the 2013 Joint Conference of the 23rd International Workshop on Software Measurement and the 8th International Conference on Software Process and Product Measurement, Ankara, Turkey, 23–26 October 2013; IEEE: Piscataway, NJ, USA; pp. 156–164. [Google Scholar]
  9. Heo, J.; Ham, D.; Park, S.; Song, C.; Yoon, W. A framework for evaluating the usability of mobile phones based on multi-level, hierarchical model of usability factor. Interact. Comput. 2009, 21, 263–275. [Google Scholar] [CrossRef]
  10. Audoux, K.; Segonds, F.; Kerbrat, O.; Aoussat, A. Selection method for multiple performances evaluation during early design stages. Procedia CIRP 2018, 70, 204–210. [Google Scholar] [CrossRef]
  11. Goo, B.; Lee, J.; Seo, S.; Chang, D.; Chung, H. Design of reliability critical system using axiomatic design with FMECA. Int. J. Nav. Archit. Ocean. Eng. 2019, 11, 11–21. [Google Scholar] [CrossRef]
  12. Baquero, L.; Rodríguez, O.; Ciudad, F. Lógica Difusa Basada en la Experiencia del Usuario para Medir la Usabilidad. Rev. Latinoam. Ing. Softw. 2016, 4, 48–54. [Google Scholar]
  13. Borsci, S.; Federici, S.; Malizia, A.; De Filippis, M.L. Shaking the usability tree: Why usability is not a dead end, and a constructive way forward. Behav. Inf. Technol. 2019, 38, 519–532. [Google Scholar] [CrossRef]
  14. Cross, N. Engineering Design Methods: Strategies for Product Design, 5th ed.; John Wiley & Sons: Hoboken, NJ, USA, 2021; p. 224. [Google Scholar]
  15. Shao, J.; Lu, F.; Zeng, C.; Xu, M. Research progress analysis of reliability design method based on axiomatic design theory. Procedia CIRP 2016, 53, 107–112. [Google Scholar] [CrossRef]
  16. Delaram, J.; Fatahi, O. An architectural view to computer integrated manufacturing systems based on Axiomatic Design Theory. Comput. Ind. 2018, 100, 96–114. [Google Scholar] [CrossRef]
  17. Aydoğan, S.; Günay, E.; Akay, D.; Okudan, G. Concept design evaluation by using Z-axiomatic design. Comput. Ind. 2020, 122, 103278. [Google Scholar] [CrossRef]
  18. Wu, X.; Liao, H. Utility-based hybrid fuzzy axiomatic design and its application in supply chain finance decision making with credit risk assessments. Comput. Ind. 2020, 114, 103144. [Google Scholar]
  19. Kulak, O.; Kahraman, C. Fuzzy multi-attribute equipment selection based on information axiom. J. Mater. Process. Technol. 2005, 169, 337–345. [Google Scholar] [CrossRef]
  20. Celik, M.; Kahraman, C.; Cebi, S.; Er, I. Fuzzy axiomatic design-based performance evaluation model for docking facilities in shipbuilding industry: The case of Turkish shipyards. Expert Syst. Appl. 2009, 36, 599–615. [Google Scholar] [CrossRef]
  21. Maldonado, A.; García, J.; Alvarado, A.; Balderrama, C. A hierarchical fuzzy axiomatic design methodology for ergonomic compatibility evaluation of advanced manufacturing technology. Int. J. Adv. Manuf. Technol. 2013, 66, 171–186. [Google Scholar] [CrossRef]
  22. Karatas, M. Hydrogen energy storage method selection using fuzzy axiomatic design and analytic hierarchy process. Int. J. Hydrogen Energy 2020, 45, 16227–16238. [Google Scholar] [CrossRef]
  23. Saaty, T.L. The Modern Science of Multicriteria Decision Making and Its Practical Applications: The AHP/ANP Approach. Oper. Res. 2013, 61, 1101–1118. [Google Scholar] [CrossRef]
  24. Saaty, T. Decision making with the analytic hierarchy process. Int. J. Serv. Sci. 2008, 1, 16. [Google Scholar] [CrossRef]
  25. Maldonado, A.; Alvarado, A.; García, J.; Balderrama, C. Intuitionistic fuzzy TOPSIS for ergonomic compatibility evaluation of advanced manufacturing technology. Int. J. Adv. Manuf. Technol. 2014, 70, 2283–2292. [Google Scholar] [CrossRef]
  26. Awan, U.; Hannola, L.; Tandon, A.; Kumar, R.; Dhir, A. Quantum computing challenges in the software industry. A fuzzy AHP-based approach. Inf. Softw. Technol. 2022, 147, 106896. [Google Scholar] [CrossRef]
  27. Entani, T.; Sugihara, K.; Tanaka, H. Interval Evaluations in DEA and AHP. In Fuzzy Applications in Industrial Engineering; Springer: Berlin/Heidelberg, Germany, 2006; pp. 291–304. [Google Scholar]
  28. Mendoza, A.; Solano, C.; Palencia, D.; Garcia, D. Aplicación del proceso de jerarquía analítica (AHP) para la toma de decisión con juicios de expertos. Ingeniare Rev. Chil. Ing. 2019, 27, 348–360. [Google Scholar] [CrossRef]
  29. Quiroz, J.; García, M. Aplicación de diseño axiomático en el desarrollo de productos escolares con plásticos bio-basados. Acad. J. 2020, 12, 1657–1662. [Google Scholar]
  30. Kulak, O.; Cebi, S.; Kahraman, C. Applications of axiomatic design principles: A literature review. Expert Syst. Appl. 2010, 37, 6705–6717. [Google Scholar] [CrossRef]
  31. Ruvalcaba Coyaso, F.J.; Vermoden, A. Lógica difusa para la toma de decisiones y la selección de personal. Univ. Empresa 2015, 17, 239–256. [Google Scholar] [CrossRef]
  32. Maldonado, A.; Balderrama, C.; Pedrozo, J.; Carcía, J. Diseño Axiomático: Libro de Fundamentos y Aplicaciones; Universidad de la Rioja: Logrono, Spain, 2019; p. 166. [Google Scholar]
  33. Suh, N. The Principles of Design; Oxford University Press: Oxford, UK, 1990; p. 418. [Google Scholar]
  34. Suh, N.; Farid, A. Axiomatic Design in Large Systems: Complex Products, Buildings and Manufacturing Systems; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
  35. Taherdoost, H. What Is the Best Response Scale for Survey and Questionnaire Design; Review of Different Lengths of Rating Scale/Attitude Scale/Likert Scale. Int. J. Acad. Res. Manag. (IJARM) 2020, 8, 1–10. [Google Scholar]
  36. Bisquerra, R.; Pérez-Escoda, N. ¿Pueden las escalas Likert aumentar en sensibilidad? REIRE Rev. D’innovació I Recer. En Educ. 2015, 8, 129–147. [Google Scholar]
  37. Nielsen, J. Finding usability problems through heuristic evaluation. In CHI, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems; ACM Press: New York, NY, USA, 1992; Volume 1, pp. 373–380. [Google Scholar]
  38. Nielsen, J.; Landauer, T. A Mathematical Model of the Finding of Usability Problems. INTERCHI 1993, 1, 206–213. [Google Scholar]
  39. Saunila, M.; Nasiri, M.; Ukko, J.; Rantala, T. Smart technologies and corporate sustainability: The mediation effect of corporate sustainability strategy. Comput. Ind. 2019, 108, 178–185. [Google Scholar] [CrossRef]
  40. Suh, N. Axiomatic Design: Advances and Applications; Oxford University Press: Oxford, UK, 2001; p. 503. [Google Scholar]
  41. Calvo, A.; Ortega, S.; Saez, A. Métodos de Evaluación con Usuarios; Universitat Oberta de Catalunya: Barcelona, Spain, 2011. [Google Scholar]
  42. Maghsoodi, A.; Mosavat, M.; Hafezalkotob, A.; Hafezalkotob, A. Hybrid hierarchical fuzzy group decision-making based on information axioms and BWM: Prototype design selection. Comput. Ind. Eng. 2019, 127, 788–804. [Google Scholar] [CrossRef]
  43. Subulan, K.; Baykasoğlu, A. An Improved Extension of Weighted Hierarchical Fuzzy Axiomatic Design; CRC Press: Boca Raton, FL, USA, 2021; p. 31. [Google Scholar]
  44. Gölcük, İ. An interval type-2 fuzzy axiomatic design method: A case study for evaluating blockchain deployment projects in supply chain. Inf. Sci. 2022, 602, 159–183. [Google Scholar] [CrossRef]
  45. Chakraborty, K.; Mondal, S.; Mukherjee, K. Analysis of product design characteristics for remanufacturing using Fuzzy AHP and Axiomatic Design. J. Eng. Des. 2017, 28, 338–368. [Google Scholar] [CrossRef]
  46. Feng, J. Wind farm site selection from the perspective of sustainability: A novel satisfaction degree-based fuzzy axiomatic design approach. Int. J. Energy Res. 2021, 45, 1097. [Google Scholar] [CrossRef]
  47. Utamura, S.; Murase, C.; Hamatani, V.; Nagano, Y. User Experience Index Scale—Quantifying Usability by Magnitude Estimation. Fujitsu Sci. Tech. J. 2009, 45, 219–225. [Google Scholar]
  48. Kim, J.; Han, S. A methodology for developing a usability index of consumer electronic products. Int. J. Ind. Ergon. 2008, 38, 333–345. [Google Scholar] [CrossRef]
  49. Brandy, A.; Mantelet, F.; Aoussat, A.; Pigot, P. Proposal for a new usability index for product design teams and the general public. In Proceedings of the 21st International Conference on Engineering Design (ICED17), Vancouver, BC, Canada, 21–25 August 2017; Volume 8, pp. 199–208. [Google Scholar]
Figure 1. Methodological framework.
Figure 2. List of key words on usability in product design.
Figure 3. Attribute clusters obtained from the literature review.
Figure 4. AHP matrix: usability attribute levels in (AHP-based) product design with evaluated weights for electronic products.
Figure 5. Graphical description of the attribute importance according to the obtained weights.
Figure 6. Design range tangible and intangible linguistic terms used (adaptation of Celik et al. [20]).
Table 1. First questionnaire’s section aimed at product experts comparing aesthetics versus functionality.
Importance of comparison between attributes (nine-point scale): Absolutely More Important | Strongly More Important | Moderately More Important | Weakly More Important | Equally Important | Weakly More Important | Moderately More Important | Strongly More Important | Absolutely More Important
Regarding the usability of the product, how important is functionality versus product quality?
Regarding the usability of the product, how important is the functionality versus the aesthetics of the product?
Regarding the usability of the product, how important is quality versus aesthetics?
Table 2. Description of the products in the case study evaluation.
Product A: Slimline Mouse–Wireless
2.4 GHz USB receiver
4 buttons (left, right, cursor movement speed and scroll click)
Scroll
Two AAA batteries
Product B: Ergonomic Optical Mouse–Wireless
2.4 GHz USB receiver
3 buttons (left, right and scroll click)
Scroll
Two AAA batteries
Automatic energy saving
Product C: Ergonomic Vertical Mouse–Wireless
USB receiver
6 buttons (left, right, up and down in the window, next or back in the window and click to scroll)
Scroll
Power source: USB cable for charging
Table 3. Cronbach’s alpha results (obtained from SPSS).
Questionnaire | Cronbach’s Alpha
Questionnaire 1—Importance of attributes—Pairwise comparison | 0.917
Questionnaire 2—Design Range | 0.955
Questionnaire 3—System Range: Product A | 0.929
Questionnaire 3—System Range: Product B | 0.950
Questionnaire 3—System Range: Product C | 0.914
Table 4. Aesthetics, quality, and functionality consistency ratio.
 | Aesthetics | Quality | Functionality
CI | 0.18 | 0.11 | 0.09
RI | 1.34 | 1.115 | 1.57
CR | 0.12 | 0.101 | 0.059
Table 5. Experts’ answers expressed in fuzzy values for aesthetics attributes.
Fuzzy values (group: aesthetics)
Attribute | Expert 1 | Expert 2 | Expert 3 | Expert 4 | Expert 5
Shape | (0.4, 1, 1) | (0.6, 1, 1) | (0.2, 1, 1) | (0.4, 1, 1) | (0.4, 1, 1)
Colour | (0.2, 1, 1) | (0.4, 1, 1) | (0, 1, 1) | (0.6, 1, 1) | (0.2, 1, 1)
Brightness | (0.2, 1, 1) | (0, 1, 1) | (0, 1, 1) | (0.6, 1, 1) | (0.2, 1, 1)
Texture | (0.4, 1, 1) | (0.6, 1, 1) | (0.2, 1, 1) | (0.4, 1, 1) | (0.2, 1, 1)
Size | (0.4, 1, 1) | (0.4, 1, 1) | (0.4, 1, 1) | (0.2, 1, 1) | (0.4, 1, 1)
Appearance | (0.4, 1, 1) | (0.8, 1, 1) | (0, 1, 1) | (0.6, 1, 1) | (0.2, 1, 1)
Innovation | (0.6, 1, 1) | (0.8, 1, 1) | (0, 1, 1) | (0.4, 1, 1) | (0.4, 1, 1)
Numbers shaded in green represent the desired minimum fuzzy value the experts gave to obtain the common area.
Table 6. System range defined fuzzy values in product A’s aesthetics.
Assigned rating of the evaluation—Product A
Aesthetics | Fuzzy Values
Shape | (0.2, 0.29, 0.5)
Colour | (0.16, 0.22, 0.46)
Brightness | (0.16, 0.22, 0.46)
Texture | (0.28, 0.37, 0.58)
Size | (0.44, 0.59, 0.74)
Appearance | (0.28, 0.37, 0.58)
Innovation | (0.16, 0.25, 0.46)
Table 7. Example of the system and common areas for the shape factor in each product.
 | Product A | Product B | Product C
Attribute | shape | shape | shape
Design range (red) | (0.2, 1, 1) | (0.2, 1, 1) | (0.2, 1, 1)
System range (green) | (0.2, 0.29, 0.5) | (0.44, 0.59, 0.74) | (0.4, 0.55, 0.7)
System area (green triangle) | 0.1484 | 0.149 | 0.1483
Common area (hatched) | 0.042 | 0.105 | 0.099
Table 8. TUIC results for the three evaluated products.
Attribute | Common Area (A, B, C) | System Area (A, B, C) | UIC (A, B, C) | W of Secondary Attribute | UIC of Secondary Attribute = UIC × W_secondary (A, B, C) | W of Primary Attribute | UIC of Primary Attribute = UIC of Secondary × W_primary (A, B, C)
A111—Shape | 0.094, 0.116, 0.097 | 0.146, 0.145, 0.143 | 0.639, 0.320, 0.564 | 0.211 | 0.135, 0.068, 0.119 | 0.09 | 0.011, 0.006, 0.010
A112—Colour | 0.114, 0.114, 0.121 | 0.139, 0.139, 0.135 | 0.282, 0.285, 0.157 | 0.040 | 0.011, 0.011, 0.006 | 0.09 | 0.001, 0.001, 0.001
A113—Brightness | 0.118, 0.117, 0.074 | 0.147, 0.142, 0.092 | 0.320, 0.274, 0.321 | 0.030 | 0.010, 0.008, 0.010 | 0.09 | 0.001, 0.001, 0.001
A114—Texture | 0.092, 0.113, 0.106 | 0.149, 0.149, 0.149 | 0.690, 0.404, 0.489 | 0.065 | 0.045, 0.026, 0.032 | 0.09 | 0.004, 0.002, 0.003
A115—Size | 0.088, 0.111, 0.065 | 0.148, 0.147, 0.143 | 0.749, 0.408, 1.146 | 0.109 | 0.082, 0.044, 0.125 | 0.09 | 0.007, 0.004, 0.011
A116—Appearance | 0.115, 0.123, 0.107 | 0.142, 0.142, 0.130 | 0.304, 0.206, 0.275 | 0.201 | 0.061, 0.041, 0.055 | 0.09 | 0.005, 0.004, 0.005
A117—Innovation | 0.099, 0.125, 0.122 | 0.151, 0.147, 0.134 | 0.608, 0.239, 0.134 | 0.345 | 0.209, 0.082, 0.046 | 0.09 | 0.018, 0.007, 0.004
A121—Finish | 0.092, 0.107, 0.112 | 0.144, 0.142, 0.144 | 0.654, 0.402, 0.364 | 0.098 | 0.064, 0.039, 0.036 | 0.29 | 0.019, 0.011, 0.010
A122—Material | 0.073, 0.115, 0.110 | 0.146, 0.152, 0.137 | 0.997, 0.404, 0.314 | 0.126 | 0.126, 0.051, 0.040 | 0.29 | 0.037, 0.015, 0.012
A123—Functions | 0.101, 0.113, 0.108 | 0.144, 0.141, 0.146 | 0.512, 0.325, 0.438 | 0.428 | 0.219, 0.139, 0.187 | 0.29 | 0.064, 0.041, 0.055
A124—Weather resistance | 0.075, 0.093, 0.095 | 0.150, 0.150, 0.147 | 0.998, 0.685, 0.625 | 0.162 | 0.162, 0.111, 0.101 | 0.29 | 0.047, 0.032, 0.030
A125—Impact resistance | 0.036, 0.084, 0.107 | 0.147, 0.152, 0.147 | 2.050, 0.854, 0.457 | 0.186 | 0.381, 0.159, 0.085 | 0.29 | 0.111, 0.046, 0.025
A131—Effectiveness | 0.123, 0.119, 0.098 | 0.223, 0.217, 0.210 | 0.852, 0.869, 1.103 | 0.110 | 0.094, 0.096, 0.121 | 0.62 | 0.059, 0.060, 0.076
A132—Efficiency | 0.084, 0.127, 0.053 | 0.194, 0.227, 0.213 | 1.216, 0.841, 2.013 | 0.078 | 0.095, 0.066, 0.158 | 0.62 | 0.059, 0.041, 0.098
A133—Utility | 0.116, 0.118, 0.101 | 0.139, 0.139, 0.145 | 0.256, 0.242, 0.519 | 0.087 | 0.022, 0.021, 0.045 | 0.62 | 0.014, 0.013, 0.028
A134—Button accessibility | 0.097, 0.106, 0.098 | 0.146, 0.137, 0.145 | 0.588, 0.371, 0.560 | 0.062 | 0.037, 0.023, 0.035 | 0.62 | 0.023, 0.014, 0.022
A135—Accessibility when using it | 0.109, 0.115, 0.086 | 0.147, 0.143, 0.147 | 0.422, 0.307, 0.771 | 0.050 | 0.021, 0.015, 0.039 | 0.62 | 0.013, 0.010, 0.024
A136—Accessibility to grasp | 0.087, 0.105, 0.077 | 0.146, 0.143, 0.146 | 0.742, 0.443, 0.918 | 0.042 | 0.031, 0.019, 0.039 | 0.62 | 0.019, 0.012, 0.024
A137—Performance | 0.046, 0.075, 0.075 | 0.147, 0.147, 0.140 | 1.688, 0.982, 0.891 | 0.077 | 0.130, 0.076, 0.069 | 0.62 | 0.081, 0.047, 0.043
A138—Intuition | 0.123, 0.075, 0.073 | 0.136, 0.202, 0.154 | 0.151, 1.435, 1.072 | 0.058 | 0.009, 0.083, 0.062 | 0.62 | 0.005, 0.051, 0.038
A139—Easy to use | 0.117, 0.120, 0.079 | 0.137, 0.139, 0.150 | 0.229, 0.211, 0.924 | 0.100 | 0.023, 0.021, 0.092 | 0.62 | 0.014, 0.013, 0.057
A1310—Comfort | 0.074, 0.111, 0.073 | 0.144, 0.142, 0.145 | 0.972, 0.354, 0.994 | 0.055 | 0.053, 0.019, 0.054 | 0.62 | 0.033, 0.012, 0.034
A1311—Security | 0.085, 0.116, 0.083 | 0.141, 0.148, 0.141 | 0.742, 0.350, 0.771 | 0.114 | 0.084, 0.040, 0.088 | 0.62 | 0.052, 0.025, 0.055
A1312—Interaction | 0.090, 0.094, 0.039 | 0.150, 0.139, 0.147 | 0.737, 0.569, 1.913 | 0.061 | 0.045, 0.035, 0.117 | 0.62 | 0.028, 0.022, 0.073
A1313—Level of mental load | 0.061, 0.097, 0.042 | 0.216, 0.139, 0.210 | 1.834, 0.525, 2.339 | 0.051 | 0.094, 0.027, 0.120 | 0.62 | 0.059, 0.017, 0.075
A1314—Level of physical load | 0.075, 0.093, 0.048 | 0.202, 0.216, 0.209 | 1.435, 1.212, 2.115 | 0.055 | 0.079, 0.067, 0.116 | 0.62 | 0.049, 0.042, 0.073
TUIC | A: 0.834, B: 0.548, C: 0.884
Table 9. Example of the AC, AS, UIC, and maximum TUIC results.
Attribute: shape
Common area (AC): 0.0045
System area (AS): 0.15
UIC: 5.059
Secondary attribute W: 0.211
Secondary attribute UIC: 1.067
Primary attribute W: 0.085
Primary attribute UIC: 0.091
Maximum TUIC: 4.878
Table 10. Proposed seven-item scale to determine the usability level on the basis of an electronic product’s minimum and maximum values.
Level | TUIC | USind
Excellent | 0–0.696 | 0.8569–1
Very good | 0.697–1.392 | 0.7142–0.8568
Good | 1.393–2.088 | 0.5713–0.7140
Regular | 2.089–2.784 | 0.4285–0.5712
Poor | 2.785–3.480 | 0.2857–0.4284
Very poor | 3.481–4.176 | 0.1429–0.2856
Appalling | 4.177–4.878 | 0–0.1428
Note: the lower the information content (TUIC), the better; the higher the value of the index (USind), the better (Equation (6)).
Table 11. Comparison of (TUIC) indices and usability levels.
Product | USind | Usability Index Level
A | 0.829 | Very good
B | 0.887 | Excellent
C | 0.819 | Very good
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
