**1. Introduction**

Around 1.3 billion people worldwide live with some form of vision impairment, and their limited access to artwork cannot be ignored in a world of increasing inclusion. Museums are obligated to accommodate people with varying needs, including people with visual impairments [1]. Art is arguably one of the most intriguing creations of humanity and should therefore be available to every person; accordingly, making visual art available to the visually impaired has become a priority. For the visually impaired, however, visiting a museum can feel alienating.

Therefore, it is necessary to expand research on universal assistance for art appreciation, exhibition content, and exhibition environments for the visually impaired. In other words, developing technology that conveys the context of an artwork, including its colors, through non-visual sensory channels such as sound, texture, and temperature is valuable, because it opens a new way for the visually impaired to enjoy art and culture in social and psychological respects. For this reason, many studies have been conducted.

However, the use of vision and sound for interaction has dominated the field of human–computer interaction for decades, even though humans have many more senses for perceiving and interacting with the world. Recently, researchers have started trying to capitalize on touch, taste, and smell when designing interactive tasks, especially in gaming, multimedia, and art environments. The concept of multimodality, or communicating information by means of several sensations, has been of vital importance in the field of human–computer interaction. More innovative approaches can be used, such as multisensory displays that appeal to sight, hearing, touch, and smell [2]. Combining the strengths of several interfaces allows more efficient user–machine communication than can be accomplished by a single interaction mode alone, and the combined strengths of various modes can compensate for the absence of vision in the visually impaired. One way to cultivate social, cognitive, and emotional empathy is to appreciate artworks through multiple senses (sight, hearing, touch, smell, etc.) [3].

Based on these considerations, multiple senses can work together to enrich the experience of the visually impaired, allowing them to sense the colors and images of exhibits indirectly through media such as sound, texture, temperature, and scent. These technologies not only help the visually impaired enjoy the museum experience, but also allow sighted people to view museum exhibits in a new way.

Museums are evolving to provide enjoyable experiences for everyone, moving beyond audio guides to tactile exhibitions [4,5]. Previous studies [6,7] reviewed the extent and nature of participatory research and accessibility in the context of assistive technologies developed for use in museums by people with sensory impairments or learning disabilities. Some museums have successfully produced art replicas that can be experienced through touch. For example, the Metropolitan Museum of Art in New York has displayed replicas of the artworks exhibited in the museum [8,9]. The American Foundation for the Blind offered guidelines and resources for the use of tactile graphics in the specific case of artworks [10]. The Art Institute of Chicago also uses 3D-printed copies of its collection to support its curriculum for design students. Converting artworks into 2.5D or 3D allows the visually impaired to enjoy them through touch, with audio descriptions and sound effects provided to enhance the experience. In 2.5D printing, a relief model (a tactile diagram of a computer-edited drawing) is printed onto microcapsule paper, called swell paper, that enables the visually impaired to easily distinguish the texture and thickness of lines [10]. Bas-relief tactile painting is a sculptural technique that produces shapes that protrude from a flat plane [10]. The quality of a relief is measured by the perceived quality of the represented 3D shape. Three-dimensional (3D)-printed artworks are effective learning tools that allow people to experience objects from various perspectives, improving the accessibility of art appreciation and the visual descriptive skills of the visually impaired by providing an interactive learning environment [11]. Such 3D printing technology improves access to art by allowing the visually impaired to touch and imagine an artwork. For example, the Belvedere Museum in Vienna used 3D printing technology to create a 3D version of Gustav Klimt's "The Kiss" [12], and the Andy Warhol Museum [13] released a comprehensive audio guide that allows visitors to touch 3D replicas of artworks during an audio tour.

Furthermore, colors should not be forgotten, because they retain symbolic meaning even for children without sight. Color is an essential element that gives depth, form, and motion to a painting. Colors can be expressed in such a way that different feelings emerge from objects. Layers of color can provide an infinite variety of sensory feelings and show multi-layered diversity, liberating objects from ideas. According to perception theory, viewers give meaning to a work according to their experiences; color is therefore not an objective attribute, but a matter of perception that exists in the mind of the perceiver. Accordingly, this review also considers how to convey color to the visually impaired through multiple sensory elements.

This review is organized as follows. Section 2 addresses examples of multisensory art reproduction in museums and museums' multi-sensory experiences of touch, smell, and hearing. Section 3 looks at how to express colors through sound, pictograms, and temperature. Section 4 is dedicated to non-visual multi-sensory integration. Finally, conclusions are drawn in Section 5.

### **2. Multi-Sensory Experiences in Museums**

Multi-sensory interaction aids learning, inclusion, and collaboration, because it accommodates the diverse cognitive and perceptual needs of the visually impaired. Multiple sensory systems are needed to successfully convey artistic images. However, few studies have analyzed the application of assistive technologies in multisensory exhibit designs and related them to visitors' experiences. Museums that already offer multi-sensory ways of appreciating artworks with the visually impaired in mind include the Birmingham Museum of Art, Cummer Museum of Art and Gardens, Finnish National Gallery, The Jewish Museum, Metropolitan Museum of Art, Omero Museum, Museum of Fine Arts, Museum of Modern Art, Cooper Hewitt Smithsonian Design Museum, The National Gallery, Philadelphia Museum of Art, Queens Museum of Art, Tate Modern, Smithsonian American Art Museum, and Van Abbe Museum. They operate a variety of tours and programs that allow the visually impaired to experience art. Monthly tours provide opportunities to touch the exhibits, supported by sensory explanations and tactile aids delivered through audio. Braille printers, 3D printers, voice information technology, tactile image-to-speech conversion, color-change technology, and other assistive tools help visitors transform the works or appreciate the exhibition with the aid of auxiliary devices.

The "Eyes of the Mind" series at the Guggenheim Museum in New York also offered a "sensory experience workshop" for museum visitors with visual impairment or low vision. In addition to describing the artworks, it used the senses of touch and smell. In the visually impaired, the sense of touch can stimulate neurons that are usually reserved for vision. Neuroscience suggests that, with the right tools, the visually impaired can appreciate the visual arts, because the essence of a picture is not vision but a meaningful connection between the artist and the audience.

### *2.1. Tactile Perception of the Visually Impaired*

The sense of touch is an important source of information when sight is absent. According to many studies, tactile spatial acuity is enhanced in blindness. Already in 1964, scientists demonstrated that seven days of visual deprivation resulted in tactile acuity enhancement. There are two competing hypotheses on how blindness improves the sense of touch. According to the tactile experience hypothesis, reliance on the sense of touch drives tactile-acuity enhancement. The visual deprivation hypothesis, on the other hand, posits that the absence of vision itself drives tactile-acuity enhancement. Wong et al. [14] tested the participants' ability to discern the orientations of grooved surfaces applied to the distal pads of the stationary index, middle, and ring fingers of each hand, and then to the two sides of the lower lip. A study comparing those hypotheses demonstrated that proficient Braille readers—those who spend hours a day reading with their fingertips—have much more sensitive fingers than sighted people, confirming the tactile experience hypothesis. In contrast, blind and sighted participants performed equivalently on the lips. If the visual deprivation hypothesis were true, blind participants would outperform sighted people in all body areas [14].

Heller [15] reported two experiments on the contribution of visual experience to tactile perception. In the first experiment, sighted, congenitally blind, and late blind individuals made tactual matches to tangible embossed shapes. In the second experiment, the same subjects attempted tactile identification of raised-line drawings. The three groups did not differ in the accuracy of their shape matching, but both groups of blind subjects were much faster than the sighted. Late (acquired) blind observers were far better than the sighted or congenitally blind participants at tactile picture identification. Four of the twelve pictures were correctly identified by most of the late blind subjects. The sighted and congenitally blind participants performed at comparable levels in picture naming. There was no evidence that visual experience alone aided the sighted in the tactile task under investigation, because they performed no better than the early blind. The superiority of the late blind suggests that visual exposure to drawings and the rules of pictorial representation could help in tactile picture identification when combined with a history of tactual experience [15].

### *2.2. Tactile Graphics and Overlays*

Tactile graphics use raised lines and textures to convey drawings and images through touch. Advances in low-cost prototyping and 3D printing, combined with added interactivity, aim to express complex images without obstructing tactile exploration. The combination of tactile graphics, interactive interfaces, and audio descriptions can thus improve the accessibility and understanding of visual artworks for the visually impaired. Taylor et al. [16] presented a gesture-controlled interactive audio guide based on low-cost depth cameras that can track hand gestures on relief surfaces during tactile exploration of artworks. Conductive filament has also been used to make 3D prints function as touchscreen overlays. LucentMaps, developed by Götzelmann et al. [17], uses 3D-printed tactile maps with embedded capacitive material that, when overlaid on a touchscreen device, generate audio in response to touch. They also reported a survey of 19 visually impaired participants that identified their previous experiences, motivations, and accessibility challenges in museums. An interactive multimodal guide uses both touch and audio to take advantage of the strengths of each mode and provide localized verbal descriptions.

While mobile screen readers have improved access to touchscreen devices for people with visual impairments, graphical forms of information such as maps, charts, and images are still difficult to convey and understand. The Talking Tactile Tablet developed by Landau et al. [18] allows users to place tactile sheets on top of a tablet that can then sense the user's touches. The Talking Tactile Tablet holds tactile graphic sheets motionless against a high-resolution touch-sensitive surface. A user's finger pressure is transmitted through a variety of flexible tactile graphic overlays to this surface, which is a standard hardened-glass touch screen of the kind typically used in conjunction with a video monitor for ATMs and other applications. The computer interprets the user's presses on the tactile graphic overlay sheet in the same way that it interprets a mouse click while the cursor is over a particular region, icon, or object on a video screen [18]. Table 1 summarizes a list of these projects and their interaction technologies.
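The core of such a system is ordinary hit-testing: a touch position on the overlay is mapped to a named region with an associated spoken description, just as a mouse click is mapped to an icon. The following Python sketch illustrates that idea only; it is not the Talking Tactile Tablet's actual software, and the region names, coordinates, and descriptions are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """One touch-sensitive area of a tactile overlay sheet (normalized coordinates)."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float
    description: str

# Illustrative regions; a real overlay would be authored alongside the tactile graphic.
REGIONS = [
    Region("lake", 0.05, 0.10, 0.35, 0.40, "A lake in the north-west of the map."),
    Region("capital", 0.50, 0.55, 0.80, 0.85, "The capital city, marked with a raised dot."),
]

def describe_touch(x: float, y: float) -> str:
    """Hit-test a touch position against the overlay regions, like a mouse click on an icon."""
    for region in REGIONS:
        if region.x0 <= x <= region.x1 and region.y0 <= y <= region.y1:
            return region.description
    return "No labelled region at this position."

print(describe_touch(0.20, 0.30))   # -> description of the lake region
print(describe_touch(0.95, 0.95))   # -> fallback message
```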


**Table 1.** Interactive tactile graphics and multimodal guides for education and map exploration.

### *2.3. Interactive Tactile Graphics and 2.5D Models*

In recent decades, researchers have explored improving the accessibility of tactile graphics by adding interactivity through diverse technologies. Despite the many research findings on tactile graphics and audio guides focused on map exploration and STEM education, as described earlier, visually impaired people still struggle to experience and understand visual art. For artists, accessibility is less about providing access to visual information than about access to reasoning, interpretation, and experience. The visually impaired wish to be able to explore art by themselves, at their own pace. With this in mind, artists and designers can change the creative process to make their work more inclusive. The San Diego Museum of Art Talking Tactile Exhibit Panel [24] allows visitors to touch Juan Sánchez Cotán's still-life masterpiece, *Quince, Cabbage, Melon, and Cucumber*, painted in Toledo, Spain, in 1602 [25]. If you touch one of these panels with bare hands or light gloves, you hear information about the touched part. This is like tapping on an iPad to make something happen; but instead of a smooth, flat touch screen, these exhibit panels can include textures, bas-relief, raised lines, and other tactile surface treatments. As you poke, pinch, or prod the surface, the location and pressure of your finger touches are sensed, triggering an audio description of the part that was touched [25].

Volpe et al. [26] explored semi-automatic generation of 3D models from digital images of paintings and defined four classes of 2.5D models (tactile outline, textured tactile, flat-layered bas-relief, and bas-relief) for visual artwork representation. An evaluation with 14 blind participants indicated that audio guides are required to make the models understandable. Holloway et al. [27] evaluated three techniques for visual artwork representation: tactile graphics, 3D printing (sculpture model), and laser cutting. Of these, the 3D-printed and laser-cut models were preferred by most participants for exploring visual artworks. Several projects add interactivity to visual artwork representations and museum objects. Anagnostakis et al. [28] used proximity and touch sensors to provide voice guidance on museum exhibits through mobile devices. Reichinger et al. [29,30] introduced the concept of a gesture-controlled interactive audio guide for visual artworks that uses depth-sensing cameras to sense the location and gestures of the user's hands during tactile exploration of a bas-relief artwork model. The guide provides location-dependent audio descriptions based on the user's hand positions and gestures. Vaz et al. [31] developed an accessible geological sample exhibitor that plays audio descriptions of the samples when they are picked up. An on-site evaluation revealed that blind and visually impaired people felt more motivated and improved their mental conceptualization. D'Agnano et al. [32] developed a smart ring that allows users to navigate any 3D surface with their fingertips and receive audio content relevant to the part of the surface being touched at that moment. The system consists of three elements: a high-tech ring, a tactile surface tagged with NFC sensors, and an app for a tablet or smartphone. The ring detects and reads the NFC tags and communicates wirelessly with the smart device. During tactile navigation of the surface, when the finger reaches a hotspot, the ring identifies the NFC tag and activates, through the app, the audio track associated with that specific hotspot; each hotspot is thus linked to its own audio content.
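On the software side, the hotspot mechanism described for the smart ring amounts to a lookup from a tag identifier to an audio track. The sketch below is a minimal illustration of that idea under the assumption that the ring reports the UID of the tag it has just read; the UIDs and file names are invented, not taken from D'Agnano et al.'s system.

```python
# Invented tag UIDs and file names, for illustration only.
HOTSPOT_TRACKS = {
    "04:A2:2B:11": "audio/rooftop_description.mp3",
    "04:A2:2B:12": "audio/facade_description.mp3",
}

def on_tag_read(tag_uid: str) -> str | None:
    """Called when the ring reports a tag; returns the audio track to play, if any."""
    track = HOTSPOT_TRACKS.get(tag_uid.upper())
    if track is not None:
        print(f"Playing {track}")   # the companion app would start playback here
    return track

on_tag_read("04:a2:2b:11")   # finger reaches a hotspot -> its audio track is played
on_tag_read("04:ff:00:00")   # not a hotspot -> nothing happens
```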

Quero et al. [33–35] designed and implemented an interactive multimodal guide prototype based on the needs identified in a preliminary user study [33] and inspired mainly by the related work of Holloway et al. [27] and Reichinger et al. [30]. Table 2 compares the main technical differences between the related works and their approach. The prototype identifies tactile gestures that trigger audio descriptions and sounds during exploration of a 2.5D tactile representation of the artwork placed on top of the prototype. The body of work on interactive multimodal guides focused on artwork exploration is summarized in Table 2.


**Table 2.** Interactive multimodal guide for appreciating visual artwork and museum objects.

#### *2.4. An Example of an Interactive Multimodal Guide for Appreciating Visual Artwork*

Cavazos et al. [33–35] developed an interactive multimodal guide that transforms an existing flat painting into a 2.5D (relief) form using 3D printing technology and combines touch, audio description, and sound to provide a rich user experience. The visually impaired can thus freely, independently, and comfortably feel the artwork's shapes and textures through touch and explore the explanations of objects of interest without the need for a professional curator. The interactive multimodal guide [35] follows these steps: (1) create a 2.5D (relief) model of a painting using image processing and 3D printing technologies (Figure 1); (2) 3D-print the model; (3) apply conductive paint (Figure 1) to objects in the artwork to create touch sensors, so that a microcontroller can detect touch gestures that trigger audio responses with different layers of information about the painting; (4) add color layers to the model to replicate the original work; (5) place the interactive model on an exhibition stand; (6) connect the model to a control board (Arduino with an MPR121 capacitive sensor); (7) connect headphones to the control board; (8) touch to use; (9) engage in independent tactile exploration while listening to mood-setting background music; (10) tap anywhere on the artwork to hear localized information, such as the name of an object, its color or shape, and its meaning; (11) double-tap anywhere on the artwork to hear localized audio, such as the sound of leaves on a tree in autumn or the noise of a rural town at night; (12) touch a physical button to hear a recorded track containing usage instructions and general information about the artwork, such as the painter's historical and social context, which is an essential part of understanding any work.

(**a**) Paul Cezanne "Compotier, Glass, and Apples", 1880 (Left: Painted with normal color; Right: Conductive paint coated) 

(**b**) Wookjin Jang (Korean Artist) "Magpie", 1986 (Left: Painted with normal color; Right: Conductive paint coated) 

**Figure 1.** Tactile 3D-printed reproductions of works as 2.5D models (courtesy of Luis Cavazos Quero, Ph.D. student, Dept. of Electrical and Computer Engineering, Sungkyunkwan University).
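To make step (1) of the list above more concrete, the following Python sketch derives a coarse 2.5D relief height map from a painting image by quantizing luminance into a few raised levels. This is only a simplification of the image-processing pipeline described in [35]: a real pipeline would segment individual objects rather than rely on brightness, and the file name, number of levels, and maximum height are illustrative assumptions.

```python
import numpy as np
from PIL import Image

def relief_heightmap(path: str, levels: int = 4, max_height_mm: float = 3.0) -> np.ndarray:
    """Quantize image luminance into a few discrete relief heights (in millimetres)."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32) / 255.0
    quantized = np.floor(gray * levels).clip(0, levels - 1)   # bin 0 .. levels-1 per pixel
    return quantized / (levels - 1) * max_height_mm           # 0 .. max_height_mm

# heights = relief_heightmap("compotier_glass_and_apples.jpg")  # hypothetical file name
# The resulting height map could then be extruded into a printable mesh for step (2).
```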

In the BlindTouch project [35,36], gamification [37] is included to awaken other senses and maximize enjoyment of the artwork. Through vivid verbal descriptions combined with sound effects, viewers can maximize their sense of immersion in the space of the painting. Each artwork was reproduced using materials that can register the timing of tactile input, so that when a person taps part of the artwork once with a fingertip, they hear an audio description of that part; if they tap twice, they hear a sound effect for that part. The sounds of objects depicted in the work are conveyed directly through natural sound, informing viewers of what is shown, while the emotional side is carried by background music whose mood matches the work, chosen with attention to instrument timbre, major/minor mode, tempo, and pitch. Two-dimensional speaker placement conveys the perspective and directionality of key objects so that the visually impaired can follow the direction of sound by hearing, awakening a real sense of space and giving them directional sensory information. Furthermore, the voice-interactive multimodal guide prototype developed by Bartolome et al. [34] identifies tactile gestures and voice commands that trigger audio descriptions and sounds while a person with visual impairment explores a 2.5D tactile representation of the artwork placed on the top surface of the prototype. The prototype is easy and intuitive to use, allowing users to access only the information they want and reducing user fatigue.
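The single-tap/double-tap behavior described above can be sketched as a small gesture classifier that keys layered content by touch pad. The code below is a hedged illustration, not BlindTouch's firmware: the pad index, content, and the 0.4 s double-tap window are assumptions, and a production guide would briefly delay the single-tap response while waiting for a possible second tap.

```python
import time

DOUBLE_TAP_WINDOW = 0.4   # seconds; an assumed threshold, not a value from [35,36]

# Invented content layers for one capacitive pad; a real guide stores these per object.
CONTENT = {
    3: {"description": "A tall cypress swaying on the left side of the painting.",
        "effect": "wind_in_cypress.wav"},
}

_last_tap: dict[int, float] = {}   # pad index -> time of the previous tap

def on_pad_touched(pad: int, now: float | None = None) -> str:
    """Pick the audio for a tap: description on a single tap, sound effect on a double tap."""
    now = time.monotonic() if now is None else now
    previous = _last_tap.get(pad)
    _last_tap[pad] = now
    layers = CONTENT.get(pad, {})
    if previous is not None and now - previous <= DOUBLE_TAP_WINDOW:
        return layers.get("effect", "generic_effect.wav")        # second tap within window
    return layers.get("description", "No description for this area.")

print(on_pad_touched(3, now=0.0))   # first tap  -> verbal description
print(on_pad_touched(3, now=0.3))   # second tap -> sound effect
```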

As a preliminary study, BlindTouch [36] aimed to represent various visual elements of a work, such as ambient sounds reflecting the period, season, time, and region of the work, as realistically as possible. In that first study, the auditory interaction was applied to Vincent van Gogh's 1889 work "The Starry Night". When users touch the BlindTouch painting three times, they hear a sound representing the starlight and the sound of a tall cypress swaying in the wind. The sound of the wind was played through two speakers to express the swirling movement of the wind, and the moonlight and starlight in the sky at the top of the work were expressed as a twinkling chime. The sounds of rustling leaves and grass insects on a summer night were added. Over these sounds, background music with an atmosphere similar to the emotions inspired by "The Starry Night" was layered. To express the warmth coming from the village, an oboe played a major scale, while a slightly fast, lyrical melody in the high register of the piano conveyed the cold feeling of dawn. The completed exhibition environment used six-channel speakers arranged on flat plates, and the wind sounds were swirled between two speakers to create a sense of space and enhance appreciation of the artwork through three-dimensional sound. A blind user who experienced the exhibited work said: "I'm so happy that I can now tell my friends that I understand Starry Night better through the blind touch. Thank you for making art more enjoyable. Especially when I'm older, it's so interesting because I can remember it in a different way."
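The swirling wind effect between two speakers comes down to slowly moving a sound's pan position back and forth. The sketch below shows one common way to do this with equal-power panning; it is an illustrative reconstruction under assumed parameters (a 4 s swirl period), not the exhibition's actual audio code.

```python
import math

def swirl_pan(t: float, period: float = 4.0) -> tuple[float, float]:
    """Left/right gains that move a sound back and forth between two speakers.

    Equal-power panning: the pan position oscillates sinusoidally with the given
    period (seconds), so a wind recording appears to swirl from side to side.
    """
    pan = 0.5 * (1.0 + math.sin(2.0 * math.pi * t / period))  # 0 = left, 1 = right
    left = math.cos(pan * math.pi / 2.0)
    right = math.sin(pan * math.pi / 2.0)
    return left, right

# Example: channel gains at a few instants over one swirl period.
for t in (0.0, 1.0, 2.0, 3.0):
    left, right = swirl_pan(t)
    print(f"t={t:.0f}s  left={left:.2f}  right={right:.2f}")
```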

BlindTouch works were exhibited at St. Mary's School (a special school for the visually impaired operated by a 65-year-old Catholic institution in Cheongju, Korea) for three weeks, from October 12 to 30, 2020 [36]. A student, Geon Tak (Figure 2), who especially liked "The Starry Night", looked very satisfied.

**Figure 2.** A student viewing the "Starry Night" by Vincent van Gogh reproduced in 2.5D at the BlindTouch exhibition (Cheongju St. Mary's School) [36].

Hye-ryeon Jeon, an art teacher at the school who participated in this exhibition, offered the following comment: "This BlindTouch study is amazing, especially by conducting research focused on multi-sensory color coding, this barren realm that no one cares about. It seems that visually impaired people will enjoy the richness of life with a very delicate study on appreciation of the artworks." Participants responded to the sound expressing the wind in the work with comments such as "It feels like the wind is fighting with each other"; "The sound is played from side to side to express the feeling of wind blowing"; and "There is a lot of wind and it feels cool and cold in the air." They said the wind sound was similar to real blowing wind and helped them recall it. The use of two speakers to effectively express the swirling wind in the work also received positive reviews from participants. While listening to the ambient sound for the stars and moon in the sky of the work, participants recalled the stars, saying, "I feel like a sparkling in a quiet place" and "There is a sound of something shining." However, one participant commented, "When I first heard it, I didn't know what it was, but later I knew that it was the sound of a star," suggesting that the sound was artificial and not evocative enough of a starry night sky. The participant who heard the sound of the cypress tree on the left side of the work replied, "The landscape of grass bugs are crying and the tree next to it is swaying in the wind," describing a lonely atmosphere and saying that the sound helped him imagine the trees.

In addition, interviews were conducted with sighted people after they appreciated the work through sight and hearing. There were four participants, two males and two females in their 20s, with an average age of 22.25 years (SD = 1.92). They commented that the ambient sounds of the various objects in the work were well reproduced and aroused interest, and the ability to appreciate the work through multiple senses rather than a single sense drew positive evaluations. When appropriate background and object sounds were applied to a work of art, they helped both visually impaired and sighted people appreciate it; users said the sounds made it possible to imagine the appearance, space, and situation of the work and induced a deeper atmosphere and sensibility. Participants believed that appreciating works of art using multiple senses can communicate deeply and provide a rich aesthetic experience. To understand how educators perceived tactile art books and/or 3D-printed replicas as a new experience for children with visual impairments, the interactive experiences of those children were evaluated. In this study, a high level of participation was observed from both teachers and children. They acknowledged that they had not previously encountered attempts to include multisensory interactions. The visually impaired students enjoyed the BlindTouch works on display under the guidance of art teachers and then returned to the art room to express their feelings without hesitation. When the teacher talked about the atmosphere of the paintings that the students completed by themselves, various reactions emerged.

Here are the works of three students who participated in the exhibition and art classes. Basic information about the students who participated in the art activities is shown in Table 3. Drawing with paints can be difficult for visually impaired students, so the teachers encouraged pictorial expression by using wheat flour paste. The students created works with an arrangement and composition similar to the works they had enjoyed, because their appreciation of the exhibition works was tactile and gave them detailed information about the objects in those works. The students heard a story about Vincent van Gogh's life and the characteristics of his paintings, which let them express their impressions in flour paste and paints as vividly as Vincent van Gogh's brushstrokes. The works below express the students' impressions of "The Starry Night" in paints and clay, using various expressions of the material in "The Starry Night" to convey their experiences of touch and hearing at the BlindTouch exhibition. The visually impaired students touched the objects in the exhibits with their hands and received auditory information; they then drew on those memories to express their feelings. The three students who participated in the BlindTouch exhibition were actively stimulated to express their emotions through the multi-sensory exhibition experience, and during the art class their emotions and feelings toward the exhibition grew richer, giving them an opportunity to express themselves naturally.


**Table 3.** Students' works reflecting personal impressions of the BlindTouch exhibit [36].

### *2.5. Immersive Interaction through Haptic Feedback*

Tactile feedback can be classified as contact or non-contact. Sensations such as sunburn, snow, wind, heat, and humidity can be perceived through either contact or non-contact channels. Haptic experiences for improving immersion through haptic feedback are diverse and complex, and humans can perceive a variety of tactile sensations, including the kinesthetic properties of objects and skin feedback when manipulating them.

Only a few assistive technologies rely on tangible interaction (i.e., the use of physical objects to interact with digital information [38]). For instance, McGookin et al. [39] used tangible interaction for the construction of graphical diagrams: non-figurative tangibles were tracked to construct graphs on a grid, using audio cues. Manshad et al. [40] proposed audio and haptic tangibles for the creation of line graphs. Pielot et al. [41] used a toy duck to explore an auditory map.

As digital interaction tools for introducing museum exhibits, Petrelli et al. [42] introduced a "Museum Mobile App", "Touchable Replicas", and "NFC Smart Cards with Art Drawings". When a replica (tangible) is placed on the NFC reader on the exhibition table, an introduction to the work is played on the multimedia screen; the NFC smart card printed with the artwork works in the same way. A survey of visitor preferences for these three modes found that the replicas and smart cards were preferred most, while the mobile app had the highest proportion of participants who did not prefer it. Notably, this was because the app interfered with the enjoyment of participating in the exhibition (55%, N = 31) [42].

Information is typically integrated across sensory modalities when the sensory inputs share certain common features. Cross-modality refers to the interaction between two different sensory channels. Although many studies have examined cross-modal interactions between sight and the other senses, there are few studies on interactions among non-visual senses [43]. The Haptic Wave [44] allows audio engineers with visual impairments to "feel" the amplitude of sound, gaining the salient information that sighted engineers get from visual waveforms. If cross-modal mapping allows us to substitute one sensory modality for another, we can map the visual aspects of digital audio editing onto another sensory modality. For example, if visual waveform displays allow sighted users to "see the sound", we can build an alternative interface for visually impaired users to "feel the sound". The demo allows visitors, sighted or visually impaired, to sweep backwards and forwards through audio recordings (snippets of pop songs and voice recordings), feeling sound amplitude through haptic feedback delivered by a motorized fader [44]. Gardner et al. [45] developed a waist belt with built-in sound, temperature, and vibration patterns to provide a multisensory experience of specific artworks.
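The cross-modal mapping behind this idea can be sketched simply: reduce the audio to a coarse amplitude envelope and drive a fader position from the envelope value under the playhead. The Python below is a minimal illustration of that mapping, not Haptic Wave's implementation; the frame size and normalization are assumptions.

```python
import math

def amplitude_envelope(samples: list[float], frame: int = 1024) -> list[float]:
    """Peak amplitude per frame of a mono signal with samples in [-1, 1]."""
    return [max(abs(s) for s in samples[i:i + frame])
            for i in range(0, len(samples), frame)]

def fader_position(envelope: list[float], playhead: float) -> float:
    """Map the envelope value at a playhead position (0..1) to a fader position (0..1)."""
    if not envelope:
        return 0.0
    index = min(int(playhead * len(envelope)), len(envelope) - 1)
    peak = max(envelope) or 1.0
    return envelope[index] / peak   # the motorized fader would be driven to this height

# Toy signal: quiet start, loud middle, quiet end.
signal = ([0.1 * math.sin(i / 10) for i in range(2048)] +
          [0.9 * math.sin(i / 10) for i in range(2048)] +
          [0.2 * math.sin(i / 10) for i in range(2048)])
env = amplitude_envelope(signal)
print(fader_position(env, 0.5))   # close to 1.0: the loud middle of the recording
```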

Brule et al. [19] created a multisensory interactive map with a raised-line overlay on a projected-capacitive touchscreen for visually impaired children, following a five-week field study in a specialized institute. The map includes several multisensory tangibles that can be explored by touch but can also be smelled or tasted, allowing users to interact with the system through touch, taste, and smell together. A sliding gesture in a dedicated menu filters geographical information (e.g., cities, seas, etc.), conductive tangibles with food and/or scents are used to follow an itinerary, and double-tapping an element of the map provides audio cues. To analyze the data, the Grounded Theory method [46] was followed, with open coding of interview transcriptions and observations. An observation of children using a kinesthetic approach to learning, together with feedback from the teachers, led to multi-sensory tangible artefacts that increase the number of possible use cases and improve inclusivity. MapSense [19] consists of a touchscreen, a colored tactile map overlay, a loudspeaker, and conductive tangibles; the tangibles are detected by the screen as touch events. Users can navigate between "points of interest", "general directions", and "cities". Once one of these information types is selected (e.g., cities), MapSense gives the city name through text-to-speech when it detects a double tap on a point of interest. Children could also choose "audio discovery", which triggered playful sounds (e.g., sword battles in the castle, flowing water where they would board a boat, religious songs for the abbey, etc.). Finally, when users activate the guiding function, vocal indications ("left/right/top/bottom") help them move the tangibles to their target. PLA 3D-printer filament was used as the material, and aluminum was added around the tangibles because it is conductive and can be detected when a tangible touches a point of interest on the tactile map overlay [19].
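The guiding function reduces to comparing the tangible's tracked position with its target and speaking a coarse direction. The sketch below illustrates one plausible way to pick the cue; the coordinate convention (origin at the bottom-left, normalized coordinates) and the tolerance are assumptions for illustration, not details from MapSense.

```python
def guidance(current: tuple[float, float], target: tuple[float, float],
             tolerance: float = 0.05) -> str:
    """Return the vocal cue ("left"/"right"/"top"/"bottom"/"reached") to move the tangible."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    if abs(dx) <= tolerance and abs(dy) <= tolerance:
        return "reached"
    # Announce the axis with the larger remaining error first.
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "top" if dy > 0 else "bottom"

print(guidance((0.2, 0.30), (0.7, 0.35)))   # -> "right"
print(guidance((0.7, 0.80), (0.7, 0.35)))   # -> "bottom"
```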

Empathy is a communication skill by which one person can share another person's perceptions and experiences. A related concept is rapport, which refers to understanding other people's feelings and situations and forming a consensus (or trust) with them. Empathy is an essential virtue in a segmented modern society. Ambi (Figure 3), created by Daniel Cho (RISD, Providence, RI, USA, 2015), is a nonverbal (visual, tactile, or sound) telepresence and communication tool for promoting empathy and rapport between family members and couples. Its sensors recognize intuitive, non-verbal (visual, tactile, or sound) signals and exchange empathetic emotions. Ambi's proximity sensor recognizes a person's presence and emotional state and communicates it to another person. One way to express affection is to wrap a hand around Ambi's waist. Through the non-visual senses of touch and sound, the visually impaired can also share empathetic emotions with others. The constant and immediate tactile feedback of another's presence and nudges gives the visually impaired a more intimate connection and non-verbal communication with others than video chat applications alone can provide.

**Figure 3.** Ambi, a soft robot that conveys telepresence (courtesy of Daniel Cho, RISD, Providence, RI, USA, 2015).
