### *3.2. Color*

For our color palette, we selected six unique hues (red, green, yellow, blue, orange, and purple) based on the color system described by Munsell [64], in which color is composed of hue, value, and chroma. Our proposed palette expresses each of the six hues across five dimensions: saturated, light, dark, warm, and cool. The light and dark dimensions are obtained by adjusting the value in the Munsell color system: a light color has a value of 7 and a chroma of 8, while a dark color has a value of 3 and a chroma of 8. The warm and cool dimensions follow an approach derived from the RYB color wheel proposed by Isaac Newton in Opticks [65]. Warm colors result from the combination of red-orange, yellow-orange, and blue-green hues, while cool colors result from the combination of red-violet, yellow-green, and blue-violet hues. Following this scheme, our proposed palette can express 30 colors across 6 hues and 5 dimensions and is shown in Figure 1.
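The structure of the palette can be sketched as a simple cross product of hues and dimensions. The following Python snippet is illustrative only: the labels stand in for actual Munsell specifications, which the text gives only for the light (value 7, chroma 8) and dark (value 3, chroma 8) dimensions.

```python
# Enumerate the 30 palette colors as (hue, dimension) combinations.
# Labels are illustrative names, not Munsell coordinates.
HUES = ["red", "green", "yellow", "blue", "orange", "purple"]
DIMENSIONS = ["saturated", "light", "dark", "warm", "cool"]

palette = [(hue, dim) for hue in HUES for dim in DIMENSIONS]

print(len(palette))   # 30 colors in total
print(palette[:2])    # [('red', 'saturated'), ('red', 'light')]
```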

### *3.3. Multi-Sensory Color Code*

Our proposed multi-sensory color code system is composed of two components: a sound color code component (Section 3.3.1), which maps the auditory channel, in the form of melodies, to the six hues shown in Figure 1; and a scent color code component (Section 3.3.2), which maps the olfactory channel, through different scents, to the five dimensions shown in Figure 1. To determine a specific color of the palette, the user listens to a melody to identify the hue while matching a scent to identify the saturated, light, dark, warm, or cool dimension of the color. In the following sections, we elaborate on each component.

**Figure 1.** The 30-color palette used for the multi-sensory color code experimentation. The six hues on the left of the figure are described through the saturated, light, dark, warm, and cool dimensions. The saturated, light, and dark dimensions are based on the BCP-37 [66].

### 3.3.1. Sound Color Code Component

The first component in the design of our multi-sensory color code is the sound color code component. While there are many different sound color codes based on several properties of sound, such as pitch and tempo, we use the VIVALDI sound color code previously developed by Cho et al. [33] as the sound color component of our multi-sensory color code. The VIVALDI color code was also designed by decomposing a color into hue and the saturated, light, and dark color dimensions. VIVALDI expresses six color hues (red, orange, yellow, green, blue, and purple) using several musical instruments: red and orange are represented by string instruments (violin + cello and guitar, respectively); yellow and green by brass instruments (trumpet + trombone and clarinet + bassoon); and blue and purple by percussion instruments (piano and organ). The pairing between the colors and musical instruments was made following the correspondence between the instruments' timbre and the color hue. To express the saturated, light, and dark color dimensions, VIVALDI uses a different set of pitches for each dimension and fragments of Vivaldi's Four Seasons: Spring, Autumn, and Summer, respectively. Regarding pitch, the saturated dimension is represented using an A major chord (medium pitch), the light dimension using F major (high pitch), and the dark dimension using E minor (low pitch). The marked difference in pitch helps the listener identify the color dimension variations more easily. The complete sound color code is summarized in Table 6.
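The VIVALDI mapping just described can be summarized as two lookup tables. The following sketch is ours, not part of the VIVALDI implementation; the pairings follow the text, while the function and table names are hypothetical.

```python
# Hue is identified by instrument timbre; dimension by chord/pitch
# (pairings as described for the VIVALDI sound color code [33]).
INSTRUMENT_TO_HUE = {
    "violin + cello": "red",
    "guitar": "orange",
    "trumpet + trombone": "yellow",
    "clarinet + bassoon": "green",
    "piano": "blue",
    "organ": "purple",
}
CHORD_TO_DIMENSION = {
    "A major": "saturated",  # medium pitch, Spring fragment
    "F major": "light",      # high pitch, Autumn fragment
    "E minor": "dark",       # low pitch, Summer fragment
}

def decode_sound(instrument: str, chord: str) -> tuple:
    """Return the (hue, dimension) pair encoded by a melody."""
    return INSTRUMENT_TO_HUE[instrument], CHORD_TO_DIMENSION[chord]

print(decode_sound("piano", "F major"))  # ('blue', 'light')
```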



Sound files are provided in the Supplementary Material.

### 3.3.2. Scent Color Code Component

In this work, we propose that using a sound- and scent-based multi-sensory approach to design a color code will ease the effort of learning the code and improve the identification of the encoded colors. The previous section centered on the definition of the sound component; this section focuses on the scent component. Scent can be used to identify either the hue or the color dimension that composes a specific color. We decided to use scent only to identify the color dimension, because the hue is already simple to identify using sound (it is only necessary to identify the type of instrument, as seen in Table 6). In addition, this choice reduces the number of scents required to represent the complete palette. The color palette for the multi-sensory color code has five color dimensions: saturated, light, dark, warm, and cool. Since the saturated dimension is the neutral shade of the color, we determined that we could represent it by either not using any smell or using a distinctive neutral one, like coffee. For the remaining light-dark and warm-cool color dimension pairs, the next step is to find a scent for each dimension. Instead of pairing each one to an arbitrary scent, we selected the scents based on their correspondence to the color dimensions, which can improve and help establish the user's mental association between scents and colors.

For the color dimension-scent pair matching, instead of making an arbitrary match, we performed a semantic differential survey based on adjectives to exclude the participants' biases as much as possible. This kind of survey is valid for both visually impaired and non-visually impaired people. For example, the latter tend to match scents to hues as follows: lemon-yellow, apple-red, and cinnamon-brown. Even though blind people have never seen colors, they make similar matches because of their education and social interaction with non-visually impaired people. During the survey, the participants smell different scents and choose which semantic adjectives relate most to each particular scent. Using those adjectives, we match the scents with the color dimensions. Making the color dimension-scent pairs based on the survey results provides a more empathic solution for blind people than simply connecting colors and scents directly.

We use the semantic differential survey proposed in Reference [67] to determine the scent-color dimension correspondence. Through the survey, we establish the association between the semantic adjectives in Table 7 and each of the candidate scents in Table 8. The selection of semantic adjectives in Table 7, and their association with the light-dark and warm-cool pairs, was made through a literature survey. The candidate scents in Table 8 were selected from the scent-color correspondence literature discussed in Section 2.2. Using the results of the survey, we then determine the association between scents and color dimensions through the semantic adjectives. The survey was performed in the following steps:


**Table 7.** Semantic adjectives for scent and color dimensions (light-dark and warm-cool) mediation.


**Table 8.** Olfactory stimuli used during our implicit association test.


Following the implicit test procedure described above, we performed a test with eighteen participants (mean age 24.7 years, standard deviation 2.9; eight women (44%) and ten men (56%)) to identify the best four scents to pair with the light, dark, warm, and cool dimensions. The final candidate list after processing the test results is shown in Table 10. The final scent-color dimension pairs are as follows: apple is matched with the light dimension, chocolate with the dark dimension, lemon with the warm dimension, and caramel with the cool dimension.


**Table 9.** Semantic adjectives association test example.

The directivity of a scent is calculated by aggregating the test users' evaluations for each of the semantic adjectives, grouped by dimension, and choosing the dimension with the largest magnitude.
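This aggregation rule can be sketched in Python as follows. The data layout, function name, and example ratings are assumptions for illustration; only the rule itself (sum per dimension, pick the largest magnitude) comes from the text.

```python
# Each participant rates a scent against semantic adjectives; adjectives
# are grouped by the dimension they mediate (Table 7). The directivity of
# the scent is the dimension whose aggregated rating has the largest
# magnitude. Scores here are illustrative values on a bipolar scale.
from collections import defaultdict

def directivity(ratings):
    """ratings: list of (dimension, score) pairs from all participants."""
    totals = defaultdict(float)
    for dimension, score in ratings:
        totals[dimension] += score
    # Choose the dimension with the largest absolute aggregated score.
    return max(totals.items(), key=lambda kv: abs(kv[1]))

# Example: hypothetical ratings for a lemon scent.
lemon = [("warm", 2), ("warm", 3), ("cool", -1), ("light", 1)]
print(directivity(lemon))  # ('warm', 5.0)
```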

**Table 10.** Scent candidates for scent—color dimension correspondence.


\* Selected candidate to represent the corresponding color dimension. Each cell describes the scent candidate and its directivity magnitude.

### *3.4. Multi-Sensory Color Code Based Visual Sensory Substitution Prototype*

To study the feasibility of our proposed multi-sensory color code for experiencing color contents, we integrated it into a sensory substitution device prototype. The prototype is based on and extends our integrated multimodal guide platform described in Reference [13]. It uses a combination of tactile, audio, and olfactory modalities to communicate the color contents of an artwork. For tactile exploration, the prototype presents the user with a 2.5D relief model of the artwork. The relief model is 3D printed and embedded with capacitive touch sensors on the model surface, using conductive ink spread over the main features of the artwork to create interactive areas; a colored paint coat is then applied. The surface of the model can be explored by touch to determine the features of the artwork.

The embedded sensors are connected to a control board composed of an Arduino Uno microcontroller (Arduino, Somerville, MA, USA), a WAV Trigger polyphonic audio player board (SparkFun Electronics, Boulder, CO, USA), an MPR121 proximity capacitive touch sensor controller (Adafruit Industries, New York, NY, USA), and four water atomization modules (Seeed, Shenzhen, China). Capacitive touch data sensed during the user's tactile exploration is handled by the MPR121 controller, which communicates touch and release events to the microcontroller through an I2C interface. The microcontroller processes the events to determine the user's touch gesture (touch, tap, or double-tap). Depending on the gesture, the microcontroller can send commands through its UART port to the audio board to play audio tracks, and digital signals through its general-purpose I/O ports to the atomization modules to produce a fine water mist. The four atomization modules are installed in small bottles containing a dilution of scent and water. The modules are housed in an enclosure and placed above the relief model, which is installed on a vertical exhibitor, as shown in Figure 2b.
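The gesture classification performed on the microcontroller can be illustrated with a timing-based sketch. The firmware itself runs in C/C++ on the Arduino; the Python below, including the timing thresholds and all names, is a hypothetical model of the logic, not the actual implementation.

```python
# Classify a sequence of touch/release event timestamps (in seconds)
# into the gestures the prototype distinguishes: touch, tap, double-tap.
# Thresholds are illustrative, not taken from the actual firmware.
TAP_MAX_S = 0.3          # press shorter than this counts as a tap
DOUBLE_TAP_GAP_S = 0.4   # max gap between two taps of a double-tap

def classify(events):
    """events: list of (kind, t) with kind in {'touch', 'release'}."""
    taps = []
    press_t = None
    for kind, t in events:
        if kind == "touch":
            press_t = t
        elif kind == "release" and press_t is not None:
            if t - press_t <= TAP_MAX_S:
                taps.append(press_t)
            press_t = None
    if len(taps) >= 2 and taps[-1] - taps[-2] <= DOUBLE_TAP_GAP_S:
        return "double-tap"
    if taps:
        return "tap"
    return "touch"  # long press with no quick release

print(classify([("touch", 0.0), ("release", 0.1),
                ("touch", 0.3), ("release", 0.4)]))  # double-tap
```

In the prototype, a double-tap result would then trigger the UART command to the audio board and the digital signal to the corresponding atomization module.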

Interaction with the prototype is simple. The user only needs to approach the prototype, wear the pair of headphones located at its side, and touch the surface to start an exploration session. While the original platform described in Reference [13] had several functions, such as audio explanations and sound effects for the artwork features, these were disabled to avoid bias during the evaluation. With this prototype, the user can freely explore the artwork by touch to identify its features and double-tap on any of them to trigger the audio and olfactory stimuli, which are based on the proposed multi-sensory color code. The audio files reproduced correspond to the audio tracks in Table 6, and the olfactory stimuli to the scents in Table 10. The user can then hear and smell the stimuli to determine the color of the double-tapped feature.
