### *5.2. Results*

These final scores indicate which VIVALDI, CLASSIC, and temperature cues best express the bright, dark, warm, and cold dimensions of color. To determine this, the sound and temperature cues that scored highest for each dimension were selected. Table 15 presents, for each dimension and method, the highest-scoring cue and its score.


**Table 15.** Highest scores attained by each method for each color dimension. The name of the cue that attained each highest score is given in parentheses.

Several interesting points can be observed from these results:


Taking these observations into account, several conclusions could be reached. First, the cues that most accurately expressed the different dimensions of color were clearly identified. Second, VIVALDI sounds were better than CLASSIC sounds for expressing color dimensions: except for one dimension, the scores reached by VIVALDI sound cues were higher. Finally, it was better to use temperature to express color hue and sound to express the color dimensions. As observed above, the warmest and the coldest temperatures each conveyed two dimensions best, but since it was important for the user that each dimension be represented by one and only one cue, temperature was not a good modality for representing color dimensions.

It can therefore be concluded that the best multi-sensory algorithm is one in which temperature cues express color hue and VIVALDI sounds express the other color dimensions. In other words, the best multi-sensory method is the temperature–sound–color coding method presented in Table 4, shown now in its final, complete state in Table 16. The algorithm of Table 16 is the temperature–sound–color coding designed through the tests and results described above.
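The structure of such a coding can be sketched as two lookup tables: one mapping each hue to a temperature cue and one mapping each dimension to a VIVALDI sound cue. The concrete values below are illustrative placeholders, not the actual cues of Table 16 (only the cue structure is taken from the text):

```python
# Sketch of a temperature–sound–color lookup in the spirit of Table 16.
# All concrete values are hypothetical placeholders for illustration:
# each hue maps to a temperature cue, each dimension to a VIVALDI sound.

HUE_TO_TEMPERATURE_C = {   # hypothetical temperature cues (°C)
    "red": 38, "orange": 34, "yellow": 30,
    "green": 26, "blue": 22, "purple": 18,
}

DIMENSION_TO_SOUND = {     # hypothetical VIVALDI sound cue names
    "bright": "vivaldi_bright",
    "dark": "vivaldi_dark",
    "warm": "vivaldi_warm",
    "cold": "vivaldi_cold",
}

def encode_color(hue: str, dimension: str) -> tuple[int, str]:
    """Return the (temperature, sound) pair encoding a color."""
    return HUE_TO_TEMPERATURE_C[hue], DIMENSION_TO_SOUND[dimension]

# e.g. Dark Red -> the temperature cue for "red" plus the "dark" sound
print(encode_color("red", "dark"))
```

Under this scheme, a color such as Dark Blue differs from Dark Red only in its temperature cue, which is why a user can be right about the dimension while wrong about the hue (the "H"/"D" split of Table 18).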


**Table 16.** Final temperature–sound–color coding.

### *5.3. Final Temperature–Sound Coding Multimodal Test and Results*

To assess the final temperature–sound multimodal coding that was designed (Table 16), a final multimodal test with the complete system was performed. There were 12 participants: college students with normal eyesight and an average age of 22 years. Each test session included an explanation of the test and the method, a short training period so the user could become familiar with the different temperatures and sounds, and a final test in which the user had to guess which color the multimodal system was representing through its sound and temperature. Each session lasted around 25 min. The testing procedure was the following:


The accuracy during the test can be seen in Table 17, and a list of all the correct and wrong answers, divided into hue (represented through temperature) and dimension (represented through sound), can be seen in Table 18. Additionally, Tables 19 and 20 present confusion matrices for the hue and the dimension of the color (presented to the user as a temperature cue and as a sound, respectively). The total accuracy was 67.5%. Considering only the hue of the colors, the accuracy rose slightly to 71.6%, while the accuracy of guessing the dimension of the color through sound reached 92.5%. Although 67.5% might not seem high, it is important to consider the limited training time the users were given; with longer training, the accuracy would likely increase. In addition, previous work [5] suggests that the capacity of visually impaired people to discern temperatures may be higher than that reflected here (with sighted users). On the other hand, the confusion matrices clearly show that the main recognition problem is caused by the colors yellow and orange, whose temperature cues (30 °C and 34 °C) are not easily discernible. Therefore, the temperature cues for orange and yellow, and that particular temperature range (30 °C to 34 °C), will have to be modified and improved in the future. Overall, the results are promising, and the system seems to have the potential to be developed and improved in future iterations.
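The accuracies reported above are simply the diagonal mass of the corresponding confusion matrix. A minimal sketch of that computation follows; the toy matrix is an illustrative example, not the paper's data (rows are the presented cues, columns the users' answers):

```python
# Sketch: recognition accuracy from a confusion matrix, as in
# Tables 19 and 20. Rows = presented cues, columns = answers given.

def accuracy(confusion: list[list[int]]) -> float:
    """Fraction of answers on the diagonal (correctly recognized cues)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Toy 3x3 example: cue i was presented, answer j was given.
toy = [
    [8, 1, 1],
    [2, 7, 1],
    [0, 1, 9],
]
print(f"{accuracy(toy):.1%}")  # (8 + 7 + 9) / 30 -> 80.0%
```

Off-diagonal mass concentrated in one 2x2 block of such a matrix is exactly the yellow/orange pattern described above: two cues that users systematically confuse with each other.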

**Table 17.** Accuracy when discerning a color correctly (both hue and dimension) and same value when only taking hue or dimension into account.



**Table 18.** Final multimodal system test results for 12 users. "H" means the answer related to hue, "D" the one related to the dimension. As a result, if for the case of Dark Red a user answered Dark Blue, the "H" answer would be wrong while the "D" answer would be correct. "Tot\_D" gives the total number of correct answers per user per dimension. "Tot" gives the total correct answers per user considering both hue and dimension together.

**Table 19.** Confusion matrix for the six color hues (presented through temperature cues). The main recognition problem is caused by the colors yellow and orange, whose temperature cues (30 °C and 34 °C) are not easily discernible.


**Table 20.** Confusion matrix for the four dimensions of the colors (presented through sounds).

