Article

Design and Evaluation of a Multisensory Concert for Cochlear Implant Users

1 Multisensory Experience Lab, Aalborg University Copenhagen, 2450 Copenhagen, Denmark
2 Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA 94305, USA
3 Center for Hearing and Balance, Rigshospitalet, 2100 Copenhagen, Denmark
* Author to whom correspondence should be addressed.
Current address: A.C. Meyers Vænge 15, 2450 Copenhagen, Denmark.
Arts 2023, 12(4), 149; https://doi.org/10.3390/arts12040149
Submission received: 3 January 2023 / Revised: 31 January 2023 / Accepted: 22 February 2023 / Published: 10 July 2023
(This article belongs to the Special Issue Feeling the Future—Haptic Audio)

Abstract: This article describes the design, implementation, and evaluation of vibrotactile concert furniture, aiming to improve the live music experience of people with hearing loss who use hearing technology such as cochlear implants (CI). The system was the result of a series of participatory design sessions involving CI users with different hearing assistive setups (bi-implant, bimodal, and monoimplant), and it was evaluated in a concert scenario (drums, bass, and female vocals) at the Royal Danish Academy of Music. The project aimed to improve music appreciation for CI users by providing a multisensory concert designed with CI challenges in mind, without excluding normal-hearing individuals or individuals with other forms of hearing aids from participating in the event. The evaluation was based on (video-recorded) observations and postexperience semistructured interviews; the data were analyzed using event analysis and meaning condensation. The results indicate that tactile augmentation provides a pleasant experience for CI users. However, concertgoers with residual hearing reported being overwhelmed when the tactile stimulation amplitude exceeded a certain threshold. Furthermore, devices that highlight instrument segregation were preferred over ones that present a tactile mixdown of multiple auditory streams.

1. Introduction

Cochlear implants (CI) are neuroprosthetic devices that partially restore auditory sensations for people with severe to profound hearing loss. The implant directly stimulates the auditory nerve in the cochlea, completely bypassing the acoustic mechanisms of the ear (Wilson and Dorman 2008). The electrical stimulation is derived from the auditory input received by an external microphone array, usually located around the implanted ear. The journey of restoring hearing generally starts with the surgical implantation and continues with a long rehabilitation process that requires users to (re)learn how to respond to the new sense of hearing. Multiple factors may influence the final auditory performance of CI users, including device and surgical properties, duration of hearing loss or whether it occurred prelingually or postlingually, the quality of recovery from surgery, therapy and rehabilitation strategies, and many more (Holden et al. 2016).
After receiving a CI, many individuals no longer appreciate music, as the device aims primarily to restore speech capabilities, often disregarding musical percepts (McDermott 2004). As a result, music perception and appreciation fall short. Specifically, the majority of CI users report difficulties with sound localization or with correctly identifying the timbre and pitch of musical instruments (Dorman et al. 2014, 2016). These limitations translate into a difficult listening experience with multi-instrument mixes, as a result of poor instrument separation (Galvin III et al. 2012). Nevertheless, the hardware and ergonomics of CIs have advanced to a high level of sophistication, and there is clear evidence that training and rehabilitation schemes for CI users result in better music perception and, as a byproduct, better speech performance. Unfortunately, these programs are few and far between and usually not available to the general public (Fuller et al. 2018).
Academics and the media have focused on music’s nonmusical cognitive and academic benefits. While some anecdotal benefits have been repeatedly disproven (e.g., the link between classical music and academic performance in children), several studies have shown the benefits of music listening (Schellenberg 2016; Schäfer et al. 2013; Črnčec et al. 2006). In intensive care units, music helps relieve tension, distract from pain, and promote spatial awareness (Aydın and Yıldız 2012; Coffman 2002; Mitchell et al. 2007). Other research on the psychological, emotional, and social benefits of music listening suggests that it provides a platform for multifaceted self-related thoughts on feelings and sentiments, escape, coping, consolation, and purpose in life (Schäfer et al. 2013). When it comes to CI users, there is evidence that they benefit from music listening in similar ways as normal-hearing individuals. However, the challenges CI users experience in perceiving and appreciating music keep them from participating in musical activities as often as they would like, and this has a severe effect on their physical, psychological, and social health (Dritsakis et al. 2017).
In this article, we extend our efforts to improve the music listening experience of CI users by using vibrotactile devices in a concert scenario, building on the work initially described in Cavdir et al. (2022). We start with a short introduction to the underlying principles and related work and continue with a presentation of a participatory design workshop aimed at integrating multisensory feedback into listening experiences. Our observations from this workshop resulted in design guidelines for vibrotactile concert furniture that supports the perception of specific musical features and elements. We organized a concert for CI users to evaluate these devices; the whole process is described in Section 3.2. Lastly, we summarize our findings and present a discussion and conclusion about our attempt to improve the musical appreciation of concert performance through vibrotactile augmentation.

2. Related Work

2.1. Accessible and Inclusive Design in Music

Researchers propose several approaches to the design and collaboration/participation process while creating accessible digital musical instruments (ADMIs) or accessible music technology (AMTs), ranging from participatory approaches to performance and improvisation.
Schroeder and Lucas (Lucas et al. 2020) discuss the custom design process and evaluation of accessible music technologies, analyzing the importance of bespoke designs in enabling impaired musicians to create music. Investigating the criteria used to judge custom musical instrument designs, Lucas et al. (2019) argue that these criteria should inform the evaluation of future ADMI designs. Samuels and Schroeder (2019) focus on the performance element in accessible and inclusive music design and investigate improvisation possibilities among performers of all backgrounds and abilities for improved inclusion.
Dickens and colleagues (Dickens et al. 2018) use participatory methods to study how people with complex disabilities interact with music in everyday life and to investigate the possibilities of embodied interactions with gesture-based technology. Marti and Recupero (2019) proposed a participatory method for designing smart jewels for deaf and hard of hearing (D/HoH) hearing aid wearers that are more than just functional. Other researchers interested in accessibility examine comparable participatory practices and what they mean for rehabilitation (Quintero 2020). However, the use of such practices in designing musical experiences, especially for D/HoH people, is very limited, and community-involving research that focuses on music and hearing loss is even less common in this field. Gosine et al. (2017) discuss the importance of building community through inclusive music-making and its potential to help disabled people through music therapy; using a workshop format, they enabled people with physical disabilities and local community musicians to work together.
Frid (2019) points out that most ADMIs are more concerned with meeting the complex needs of users with physical and cognitive disabilities than with how people with vision and hearing impairments experience music. By 2019, only 6% of ADMIs focused on hearing impairments, and even fewer considered how CIs are used with music.

2.2. CI Music

People with hearing impairments have vastly different hearing profiles, abilities, and perceptions. This is due to factors including age, cognitive processing, residual hearing, hearing aid use, and musical training (Gfeller et al. 2019); this variability is considerably greater among cochlear implant (CI) users.
In the past 30 years, CIs have seen a remarkable transformation, giving more than 500,000 profoundly deaf people partially restored hearing (McRackan et al. 2019). To gauge the effects of the implants, speech recognition tests are predominantly used, at least in the development stages. The Nijmegen Cochlear Implant Questionnaire (Hinderink et al. 2000) is frequently used worldwide to assess the benefits of cochlear implant surgery, but only 3 of its 60 items relate to music. The Cochlear Implant Quality of Life survey (McRackan et al. 2019) tries to address this disparity by dedicating one of its five categories to the evaluation of auditory entertainment. This does not mean that surveys focusing on music have not been developed: the Music-Related Quality of Life Measure was created to guide music rehabilitation for CI users (Dritsakis et al. 2017), and Frosolini et al. (2022) first recommended using it in conjunction with the Nijmegen questionnaire. Nevertheless, when evaluated on monosyllabic words, common implant systems obtain 50–60% accuracy after 24 months of usage, and nearly 100% on phrases (Wilson and Dorman 2008). Some individuals have astonishingly good outcomes, demonstrating what can be achieved with a neuroimplant in a cochlea that is otherwise completely malfunctioning. The results are improving, especially for individuals using bilateral implants, although variability is still substantial, with standard deviations across trials ranging from roughly 10% to 30% (Wilson and Dorman 2008).
Unfortunately, the pitch and timbre perception offered by the current generation of CIs is drastically compromised. Another notable drawback is that users have trouble distinguishing between competing sounds when numerous instruments are playing or when extended reverberations are present (Certo et al. 2015; Fletcher et al. 2020). Furthermore, even when signals are below the CI pitch saturation limit (300 Hz), the representation of the fundamental frequency (F0) of complex sounds is weak, with difference limens roughly ten times larger than in normal hearing. Lastly, the dynamic range available for electrical stimulation in CI users is only about an eighth of what is available to listeners with normal hearing (Zeng and Galvin 1999; Zeng et al. 2002), further compromising music listening abilities and, by extension, musical appreciation. Since the general musical experience of CI users is subpar, these cumulative circumstances prevent the evaluation of music experience from being used as a gauge of the success of the implants.

2.3. Multisensory Integration

The foundation of the research presented in this paper is the multisensory integration principle, which describes how people combine information from several senses to create coherent experiences (Stein and Meredith 1993). The prerequisite for this integration to be most prominent is that the stimuli overlap in time; this will result in a perceptual enhancement that is strongest for the stimuli that are least effective (Stein and Meredith 1993).
Recent research shows that multisensory integration can in fact occur at very early stages of cognition, leading to supra-additive integration of touch and hearing in the specific scenario of auditory–tactile stimuli (Fontana et al. 2016; Foxe et al. 2002; Kayser et al. 2005). This is particularly helpful for CI users, who have been found to be stronger multimodal integrators (Rouger et al. 2007). Additionally, research on auditory–tactile interactions has demonstrated that the presentation of a tactile signal can alter cross-modal perception and vice versa (Jousmäki and Hari 1998; Nordahl et al. 2012). A popular example is the loud concert, which feels powerful partly because of the vibrations produced by the massive speakers; listening at equal loudness through headphones does not feel as exciting.

2.4. Vibrotactile Augmentation of Music

Previous research on the tactile augmentation of music has made substantial use of multisensory integration. In 2009, Karam et al. exploited the audio-tactile resolution of the skin, drawing inspiration from earlier sensory substitution vocoders (Karam et al. 2009). In accordance with the cochlea metaphor, in which lower frequencies are reproduced at lower body areas than higher ones, their project produced a chair with four pairs of voice coil actuators set in an array along the backrest. According to the authors, each actuator was capable of reproducing an octave of the piano, from 27.5 Hz to 4186 Hz. They assessed their design in terms of emotional response and concluded that participants preferred audio-tactile stimulation strategies over the auditory signal alone (Karam et al. 2009). A wide range of opinions, largely favorable, were expressed in response to further chair improvements (Baijal et al. 2015).
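As a concrete illustration of this cochlea-metaphor mapping, the sketch below splits a signal into octave-wide bands spanning the piano range cited above, with lower bands intended for actuators placed lower on the body. This is a minimal sketch under our own assumptions (fourth-order Butterworth band-pass filters, the band edges shown), not the actual implementation used in the cited chair.

```python
# Minimal sketch of a cochlea-metaphor filterbank: octave-wide bands over the
# piano range (27.5 Hz to 4186 Hz); bands[0] targets the lowest actuator.
# Filter type and order are our assumptions, not the cited design.
import numpy as np
from scipy.signal import butter, sosfilt

def octave_bands(signal, sr, f_low=27.5, f_high=4186.0):
    """Split `signal` into octave-wide bands, ordered low to high."""
    bands = []
    f1 = f_low
    while f1 < f_high:
        f2 = min(f1 * 2.0, f_high)
        sos = butter(4, [f1, f2], btype="bandpass", fs=sr, output="sos")
        bands.append(sosfilt(sos, signal))
        f1 = f2
    return bands

# Example: split one second of test noise at 48 kHz into octave bands.
sr = 48000
noise = np.random.default_rng(0).normal(size=sr)
for i, band in enumerate(octave_bands(noise, sr)):
    print(f"band {i}: RMS = {np.sqrt(np.mean(band**2)):.4f}")
```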
With the assistance of the hearing-impaired community, Nanayakkara et al. created another chair installation (Nanayakkara et al. 2009). Their haptic chair was designed to provide whole-body stimulation at first, with two contact speakers acting as haptic transducers placed beneath the armrest. For further iterations, actuators aimed at the lower back were added, alongside a vibrotactile footrest (Nanayakkara et al. 2012). The tactile stimulation was always presented in conjunction with sound, reproducing an amplified version of the auditory input. Their chair has been successfully employed in longitudinal research (12–24 weeks) to improve music listening and speech therapy for deaf children, underscoring the need for training when users are expected to adjust to a novel haptic system (Nanayakkara et al. 2012).
A 2015 collaboration between Queen Mary University and the Deaf arts charity Incloodu led to the creation of an installation in the form of a sofa and armchair (Jack et al. 2015). The devices utilized a SubPac under the seat and voice coil actuators mounted in the backrests and armrests. The armrests recreated a noisy component associated with timbre, while the spatial auditory information was distributed from low to high frequencies across sections of the backrest, similar to the cochlea metaphor described in (Karam et al. 2009). A severely deaf architect who specialized in creating accessible furniture was employed to design the furniture. Their analysis demonstrates that musical style has a significant influence on the experience, with highly rhythmic music evoking more favorable reactions than music where harmonic motion was most essential (Jack et al. 2015).

3. Materials and Methods

This section describes and discusses the participatory design process, as well as the multisensory concert and its evaluation.

3.1. Multisensory Integration Design Workshop

By introducing several multisensory installations, this study aimed to (1) involve CI users in the early stages of building novel audio-tactile displays and (2) analyze the constraints of the configurations that were shown. We conducted an exploratory study, gathering data using a triangulation of techniques, including observations, pre- and postworkshop interviews, and the think-aloud protocol (Someren et al. 1994).

3.1.1. Workshop Format

Each meeting had a set agenda and lasted between 60 and 120 min; one of the authors was present the entire time, taking notes and recording the conversations. Before the meeting, attendees were asked to complete an online survey about their demographics and their past and present musical preferences. Before any installations were presented, a semistructured interview was held with the goal of learning more about people’s musical engagement patterns; the survey results served as the basis for that interview. The participants were then led to investigate and experiment with the various installations described in Section 3.1.3. The session ended with a brief exit interview that summarized the participants’ feedback. The attendees were encouraged to think aloud and were in constant communication with one or more of the authors during the entire session.

3.1.2. Participants

Three people voluntarily took part in the study after being contacted by email or through an open invitation posted to the Facebook group for Danish cochlear implant users. Participant 1 (P1) is a 52-year-old female who started losing her hearing at the age of 3 and currently has no residual hearing. In 2017, she was bilaterally implanted with Kanso CIs, experiencing a positive transition from hearing aids to cochlear implants. She likes Fleetwood Mac, Dolly Parton, and The Beatles but dislikes techno, classical music, and heavy metal. She has a background in piano and dancing (African and Danish dances). She sings in a choir but struggles to distinguish and synchronize with the accompaniment, misidentifying when to start singing. She reported holding a water bottle or glass in her hands to feel the vibrations on the few occasions she attends concerts.
Participant 2 (P2) is a 69-year-old male with a genetic hearing disability who uses a cochlear implant in his left ear and a hearing aid in his right ear. He comes from a musical family and has experience singing in a church choir and performing competitive dancing. He likes opera, waltzes, church, and classical music but dislikes rock. More recently, he rarely listens to music. When listening to familiar music, he explains: “[…] my memory was another […] I have this sort of feeling of something is in another way.”
Participant 3 (P3) is a 41-year-old male. He uses a Nucleus cochlear implant in the right ear and is nearly deaf in the left ear, with a hearing threshold of +95 dB. He has been using hearing aids since the age of 3, frequently upgrading them to higher-amplification ones. When listening, he can identify when music is playing, the sex of the singer, and the instrument if the music is performed live on stage. He regularly attends festivals, mostly for social reasons. Lately, he enjoys listening to music for short periods of time (5 min), because after about 10 min it becomes exhausting. He mostly likes rock, especially the Danish band Dizzy Mizz Lizzy.

3.1.3. Workshop Experiences

Three different setups were presented to the participants, each focusing on unique approaches to music listening and augmentation.

Installation 1

Since CI users find it extremely difficult to distinguish specific musical instruments in a mixdown (Oxenham 2008), we set up a multichannel listening environment. The installation used a four-channel speaker configuration to reproduce multichannel recordings in order to test CI users’ instrument segregation processes; a diagram of the setup can be seen in Figure 1. To compare and contrast single-instrument and multi-instrument mixes, we invited listeners to freely walk about the space and hear the various sound sources.
To avoid having the loudspeaker directionality altered or diminished by room reverberation, the experiment was carried out in an anechoic room on the campus of Aalborg University, Copenhagen. The configuration comprised four Dynaudio BM5 MKIII loudspeakers and a Steinberg UR44C audio interface connecting the laptop to the speakers. Each loudspeaker reproduced a single instrument: multitrack recordings of drums, bass, vocals, and alternating piano or guitar were played through the digital audio workstation Reaper, which routed each instrument to a separate speaker. No routing changes were applied across the three participant sessions, to maintain consistency between the experiences. The amplitude level of each channel was set by the authors to obtain a balanced mix that allowed a normal-hearing person in the center of the room to perceive all the instruments at equal loudness. The recordings were played without any effects, such as reverberation or compression, to avoid any possible confusion for the listener.
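For illustration, this one-instrument-per-speaker routing can be reproduced outside a DAW in a few lines of code. The sketch below plays four mono stems through a four-channel interface with no effects applied; the stem file names and the sounddevice-based playback are our assumptions, as the study performed the routing in Reaper.

```python
# Sketch of one-instrument-per-speaker playback over a 4-channel interface.
# Assumes four mono stems sharing a sample rate; file names are hypothetical.
import numpy as np
import soundfile as sf
import sounddevice as sd

stem_files = ["drums.wav", "bass.wav", "vocals.wav", "piano.wav"]

stems, rates = [], []
for path in stem_files:
    audio, sr = sf.read(path)
    if audio.ndim > 1:                      # fold any stereo stem to mono
        audio = audio.mean(axis=1)
    stems.append(audio)
    rates.append(sr)
assert len(set(rates)) == 1, "stems must share one sample rate"

n = min(len(s) for s in stems)              # truncate to the shortest stem
channels = np.column_stack([s[:n] for s in stems])  # shape: (samples, 4)

# Dry playback (no reverb/compression), one stem per output channel.
sd.play(channels, samplerate=rates[0])      # select a 4-channel device first
sd.wait()
```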
Upon entering the room, participants received a brief explanation of the experience before selecting their favorite song from three well-known rock, soul, and reggae tunes. Then, after agreeing with the user on an appropriate loudness, we kept all channels at that level throughout the whole test.
For the first portion of the experiment, the subject was instructed to stand in the middle of the room and attempt to identify the instruments being played and the speakers they were coming from. After they made their choices, we asked the user to circle the room, moving in close proximity to each loudspeaker, to confirm or retract their statements regarding the types of instruments played and their locations. In the second and final section, we let the subject choose the location in the space where the music sounded best to them. The participant was free to comment at any time during the entire experiment on what they were thinking and how they felt about the listening situation and was encouraged to do so with guiding questions from the authors.

Installation 2

In order to improve the music listening experience of CI users, we started a design process to investigate whether and how audio-tactile feedback might be included in a sitting installation. We concentrated on delivering low-frequency augmentation, due to the poor auditory resolution CI users experience in that range (Başkent and Shannon 2005).
With three CI users and three normal-hearing participants, we tested two prototypes comprising three parts: a seat, a footrest, and a handheld device. These components could be used both separately and simultaneously. A headphone splitter fed the same signal to the amplifier of each transducer, and all users had access to the gain control of each actuator. Only the first user chose to change the gain balance herself; the other two gave the researchers verbal directions. For the first user, the music was played over a pair of B&W 800D speakers, and for the second and third users, a pair of Mackie SRM450 + SRM1550 speakers. Users had access to the master volume knob, which coupled the tactile and auditory volumes, as well as to the signal feeding the headphone amplifier, which served as a multichannel signal splitter in this case.
Three different sitting installation types offered three distinct experiences. The first was a tactile vehicle seat actuated by a ButtKicker LFE, powered initially by a ButtKicker BKA1000 and later by a StageLine ST600 in bridge mode, after we discovered that the BKA1000 amplifier limited the higher frequencies. The actuator was placed behind the car seat, which was fastened to a wooden EUR-pallet platform with bolts, as shown in Figure 2. The headrest, as well as the backrest, received adequate tactile feedback from the actuator, even though neither was in direct contact with it. Participants P1 and P3 noted that a high tactile amplitude could easily become overwhelming in this setup.
Based on feedback from P1 and P3, who assessed the first design as potentially overwhelming, the second and third sitting experiences moved from a low seating position to a more upright one, using a bar stool instead of the car seat. We chose the bar stool design because it gives users a choice over how much body weight they apply, which is coupled with the tactile sensation received. The bar stool was actuated by a ButtKicker Advanced driven by a ButtKicker BKA300. This arrangement used less power, because the actuator was significantly smaller than the one in the vehicle seat, but it still provided sufficient tactile stimulation. The frequency responses of the two setups were significantly different, with a high-frequency emphasis in the ButtKicker Advanced arrangement, as noted by the authors.
The configurations differed between P2 and P3: P2’s actuator was bolted perpendicular to the seating area, while P3’s was fixed parallel to the ground on the side of the seat. In response to P2’s observation about the bar stool’s lack of intensity (possibly in comparison with the car seat), the side actuator arrangement sought to conduct more tactile stimulation. We observed an unusual behavior in P3’s setup, where strong transients caused the bar stool to shake laterally, possibly as a result of loose joints; it felt like a slight kick in the back of the chair.
The footrest had a 22° inclined plane, created in accordance with H. Dreyfuss’ measuring standards and guidelines (Tilley et al. 1993). The footrest’s dimensions of 45 cm in length and 60 cm in width were necessary to allow room for the actuator underneath the inclined plane and to accommodate two adult feet on top. The same ButtKicker Advanced + BKA300 combination as for the bar stool was employed. Matching the seat setup shown in Figure 2, the transducer was bolted beneath the footrest. The setup was identical for all participants.
There were two handheld devices. By stacking 51 laser-cut slices of 4 mm HDF, a cylindrical handheld grip measuring 204 mm in length and 110 mm in circumference was created. These measurements fall within the design guidelines provided by H. Dreyfuss (Tilley et al. 1993), between sizes 3 and 4 of a tennis racket grip, representing the middle sizes for adults. As shown in Figure 2, this grip was fastened to a Brüel & Kjaer Type 4809 portable vibration exciter. The second interface, called VAM (Vibrotactile Actuator for Music), was created around the Tactuator BM1C, an ovoid device measuring 84 mm in width, 58 mm in height, and 89 mm in depth, described thoroughly in (Paisa et al. 2022). P1 tested the cylindrical grip and the VAM, each in conjunction with the seat and footrest, but P2 and P3 abandoned the VAM because they felt it “didn’t add much”.
As for the music, the initial auditory stimulus was Peggy Lee’s Fever, chosen for its distinct instrumentation and the prominence of the female vocal track, matching the preferences of CI users as presented in (Buyens et al. 2014). This opening track was played in each configuration of Installation 2. Two distinct signals were transmitted in succession to the handheld grip; the first was congruent with the one offered through the other actuators, while the second was band-pass-filtered to isolate the female vocal range and pitched one octave lower to correspond to the skin’s sensitivity range (Jones and Sarter 2008; Wilska 1954). The latter was only used in P1’s experiment, as she reported that the pitch-shifting process decoupled the voice from the tactile sensation. P2 and P3 were instead exposed to a solo bass improvisation of the same song on double bass or ukulele bass. The performance featured a variety of playing techniques (pizzicato, slapping, staccato, etc.), making use of the instrument’s entire range and using the bar stools and vehicle seats as props. P3’s experience also featured a section with live drums.
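The processing applied to the second grip signal can be sketched as follows: band-pass the track around the female vocal range, then shift it one octave down towards the skin’s sensitivity range. The band edges (here roughly 160 Hz to 1.2 kHz), the input file name, and the librosa-based pitch shifting are illustrative assumptions; the article does not fix the exact parameters.

```python
# Sketch of the vocal-channel processing for the hand grip: isolate an
# approximate female vocal range, then pitch it one octave down (-12 semitones).
# Band edges and the input file are assumptions.
import librosa
from scipy.signal import butter, sosfilt

y, sr = librosa.load("fever.wav", sr=None, mono=True)    # hypothetical file

sos = butter(4, [160.0, 1200.0], btype="bandpass", fs=sr, output="sos")
vocal_band = sosfilt(sos, y)                             # approximate vocal range

# One octave lower, towards the skin's vibrotactile sensitivity range.
tactile_signal = librosa.effects.pitch_shift(vocal_band, sr=sr, n_steps=-12)
```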
Participants chose additional audio content for their ideal arrangement. All three of them favored the configuration shown in Figure 2. P1 listened to Fleetwood Mac—Dreams; P2 to Vienna Philharmonic—An der schönen, blauen Donau (excerpts); and P3 to Dizzy Mizz Lizzy—Silverflame.

Installation 3

With the aid of movement-based performance and in-air haptics, we also looked at the participants’ experiences with embodied interactions. We presented passages from the inclusive performance research Felt Sound (Cavdir and Wang 2020), which was created for both hearing and D/HoH audience members and consists of a digital musical instrument and a performance environment. Initially, Felt Sound was performed live with an 8-subwoofer speaker configuration while audience members sat close to or touched the speakers. Due to constraints on time and space, as well as COVID-19 safety measures, performance snippets were presented to each participant separately, with two subwoofers positioned facing them and surrounding their seating area. Participants were still encouraged to engage with the speakers and use their hands to physically feel the vibrations.
We gave a succinct overview of the inspiration, idea, and performance style of Felt Sound, explaining its context to the participants before presenting the extracts. After the performance, we talked about their interactions with Felt Sound, as well as their own musical and dance routines. Prompted by this novel movement-based musical concept, participants shared their own associations and experiences with movement practice and music.

3.1.4. Experiences and Results

Each participant’s conversation with us was audio-recorded and transcribed after the study. This section provides an overview of these discussions, emphasizing the participants’ enjoyment of the installations and their overall experiences.

First Participant

As described in the previous section, P1 experienced four vibrotactile displays, including a car seat, footrest, hand grip, and the VAM in two situations (processed and unprocessed signals). We initially adjusted the listening volume to “comfortably loud”, just above conversation level, and the separate tactile amplitudes to “perceptually equal”; the tactile amplitude was coupled to the audio loudness through the “master out volume” knob on the sound card.
In the first instance (listening to the processed music), she stated that it was “fun to feel the vibrations throughout the entire body”, echoing her experience with the water bottle at concerts (see Section 3.1.2). She did not comprehend how the vocals were mapped to the haptic feedback, because she could already hear the voice through the speakers and did not require an additional stimulus representing it. Moreover, she only raised the volume of the hand grip slightly, a few times.
She appeared more engrossed in the song and moved to the music when she was given the second case (listening to the raw audio). As in the first instance, she tried slightly turning up the seat, footrest, and hand grip. When the music finished, she said she favored this listening method over the first case, because she could feel the melody in the footrest. She added that listening to the vibrations through the chair in this configuration could occasionally be overpowering.
Over the course of the experiment, her perspective evolved. She said it was enjoyable and that she could feel the voices through the hand grip, and the bass line (which she initially thought was coming from a keyboard) through the foot pad and the seat. The same signal was replicated by all actuators, but different haptic sensations were felt at various points on the body, amplifying the notion of pitch and instrument type. In response to the question of whether she would use a device of this type at a concert, she said, “I would want to have some help from vibrations”, and she went on to explain how she usually sits quite near the speakers at live shows in order to receive the haptic feedback. Additionally, she claimed that she would utilize such devices if “it is trustworthy”. In comparison with the other haptic listening devices, she placed less emphasis on her experience using the VAM, deeming it insufficient. We took her comments concerning the VAM to mean that it was “not strong, comparing to other actuators”.
She recounted her experience with music and movement after experiencing Installation 3. The installation inspired her to describe her physical routine and more embodied musical experiences, including singing. She stated that she finds it challenging to recognize onsets when she sings in a choir, specifically being able to tell when to begin singing solely by listening to the piano. She expressed interest in using gestures during her vocal practice to help her and to complement her conductor’s support. She also mentioned how watching a gesture-based performance helped her to understand and appreciate music.

Second Participant

P2 experienced Installation 1 with the song Ain’t No Mountain High Enough by Marvin Gaye and Tammi Terrell. Listening from the center of the room, he correctly distinguished the left and right channels, but he misidentified the instruments playing on each channel. When we invited him to get closer to each speaker, he was able to correctly identify all the instruments, including the alternating male and female voices, but was unable to make out the words. He recognized that the guitar and keyboards were playing simultaneously on the same channel, just like the voices. He said that “it’s always about guessing”, which showed how uncertain he was of his responses. He relied mostly on his hearing aid, pointing his nonimplanted ear in the direction of the speakers.
Finally, we let him freely choose the location where the music suited him best. He positioned himself closer to speakers 1 and 2, centered between the 1–2 and 3–4 speaker pairs, saying, “I think this must be the perfect (place) for this kind of music because it is feasible to hear all of it.” He claimed that after being introduced to each instrument independently, they all started to make more sense to him. Similarly, after being told the chorus lyrics, he could follow them while identifying the words, confirming the pop-out effect.
With the same volume settings as those initially established for P1, the second installation included the car seat, the bar stool (with the vertically mounted actuator), the footrest, and the hand grip designed around the B&K actuator. One part of the setup was the bar stool with a footrest and a hand grip, and the other was the vehicle seat with a footrest and a hand grip. Without applying any processing to the actuator signals, we played the same music as for P1. We interrupted the listening for an intermediate conversation after about 90 s with the initial setup, and the subject stated where he felt the vibrations most strongly: in the thighs, the ankles, and up to the elbows. Although he commented little on the perceptual quality of the stimuli, he was verbose about the locations and intensities of the vibrations. He recognized the deep bass and the female voice. Lastly, he added that he could recognize the melody with ease.
When we asked him how the music made him feel, he replied: “It was more like a little bit sad music.” He went on to say that the music needed to be happier for him to like it. Additionally, P2 acknowledged that he dislikes the bass as an instrument but loved the live music part when one of the researchers played a double bass solo. He added that the installation was more engaging but that the music selection affected his experience: it was not in his preferred musical genre, and the system enhanced something he did not like. He asked to hear J. Strauss’ composition An der schönen, blauen Donau. After the very first chord, the participant commented, “Yeah, this is much better, much better, yeah, and I can feel it (the vibrotactile system) supports the song,” and continued, “[…] if you enjoy the music, this gives extra power.” The second configuration was limited to playing waltz music; however, the conversation shifted to the economic worth of musical experiences, and no comments on the second setup were made. He described himself as fairly conservative: he would prefer a traditional chair and, unless specifically encouraged to try one in a concert hall, would not utilize such a system (setup 2) in a concert context.
Regarding Installation 3, he had a different reaction compared with P1: the music’s low-frequency content did not appeal to him as much. Although he said that he could physically feel the vibrations, this style of listening did not improve his enjoyment of the music; he concluded, however, by saying that the music’s gestural performance element worked well.

Third Participant

In Installation 1, P3 chose to hear Don’t Stop Me Now by Queen. Positioning himself in the center of the room, he recognized the voice and noted that the speaker playing the bass line had a much lower volume than the others. Moving closer to each speaker, he quickly confirmed the voice but mistook the piano for a guitar. He had trouble recognizing the instrument on the speaker playing the bass line and asked whether it was a tuba. He identified the drums accurately.
When we asked him to pick a favorite spot in the room, he spent several minutes walking between the speakers and the general listening area. The location he chose was far from speakers 1 and 2 but much closer to speakers 3 and 4, which he was facing; these were playing voice and guitar, respectively, as seen in Figure 1. He mentioned that he could hear “a little bit of everything” at this point, focusing solely on the vocals, bass, and drums. During the postexperiment talks, we learned that he preferred listening to the instruments on their own, since he could interpret them in his own way. Discussing the bass sound at concerts further, he said that he “could never discern [the individual instruments] since everything sounds like ‘mush’”, but in this case, after hearing each instrument independently, “[…] it was a bit simpler”.
The only change between P2 and P3 in the second installation was how the actuator was positioned on the bar stool, as explained in Section 3.1.3. Once the music had stopped after about 90 s (before the second verse), the participant quickly remarked that the hand grip and the bar stool did not add much to the experience. When pressed, he was unable to specify the song’s valence. Following this, all actuators were muted before the music started again, and we gradually increased each one’s amplitude, instructing the participant to focus on which locations he preferred rather than which were active; the outcome was the same: he preferred the hand grip and the footrest (especially when it was turned up more). He remarked that while on one hand the song is “slow and heavy”, the singing (voice) sounds cheerful, so it was impossible for him to pin down the song’s atmosphere.
All actuators were set to their initial amplitudes for the live ukulele bass performance. P3 provided comments, stating that he favored the footrest for lower frequencies but preferred the hand grip for higher frequencies. Additionally, he noted that it was simpler to “feel what happens” through the hand grip when short, quick notes were played. Similar to the first inquiry, the bar stool “did not have anything to give”.
Moving on to the setup with the actuated car seat, the participant remarked that setup 1 felt a little distant by comparison and that “[…] this is much better to have it in the back, this way”. The volume of the actuators was altered during this experience, and we came to the conclusion that the system works best when all three actuators are perceivable and that it seems “empty” when the car seat is not actuated. After the live bass performance (the same as for setup 1), P3 said that while the setup is enjoyable to use, he still feels like he is “missing something” and “just misses genuinely being able to enjoy music”, a feeling unaffected by the setup demonstrated. However, as with setup 1, he could “feel” the voice more through the hand grip.
When asked which he liked more, the recorded or the live performance, he responded that the latter was better, since it had more instruments and “more varied sounds”. As a result, two of the researchers spontaneously performed a drum and bass duo set, grooving to the bass line from Fever. The participant mentioned that before this experience he did not know how to tell the difference between the bass sound and the drum sound (in live performances), but through this multisensory experience he could identify both.
His encounter with Installation 3 was consistent with P1’s observations regarding the gestural performance. He claimed that he had never witnessed a musical performance in which the audience participated by moving their bodies to the music.

3.1.5. Workshop Discussion

Although CI users may share common difficulties in experiencing music, such as pitch identification, source localization, and instrument segregation (auditory streaming), their priorities in addressing these obstacles vary greatly from person to person. P1 was able to hear the nuances of pitch variations in singing, but due to her musical training, she prioritized practicing onset detection and phrasing to support her choir singing. Similarly, P2 preferred limiting his experience to the musical genres he enjoys, enhancing those specific genres rather than practicing to fill the gaps in his musical perception. Researchers and designers should consider individual differences not only in hearing profiles but also in musical appreciation, engagement, and preferences. Age, the use of hearing aids, and musical training, among many other factors, have a substantial impact on these design considerations when working with CI users. Therefore, given the variation in CI users’ perception, experience, and comprehension of everyday sounds, speech, and music, on top of the factors mentioned above, we believe that researchers designing multisensory experiences should treat personalization and customization as core design requirements.
Similarly, for many CI users, the experience of music is novel and requires ongoing practice and education. A continuous musical activity in which users can practice becomes essential for those who struggle to comprehend music. Our assessment methodology reflects this process of investigating and comprehending CI users’ hearing and engaging with technology in ways that support both their hearing development and appreciation of music. Their participation in generating ideas and directing design directions was essential to the research procedure.
We routinely consulted the most recent research on evaluating CI hearing and used it to guide our experience design research, because the references CI users draw on when describing their perception and experience of music are highly subjective. We feel that a more comprehensive approach to facilitating the music engagement of CI users allows more embodied listening and music-making practices. Rather than separating the design, assessment, and evaluation processes, developing new musical interaction experiences calls for an integrated and participatory research approach. In addition, we discovered that this process-oriented evaluation enables designers to find more options for engagement with CI users, as participant recruitment in the CI community remains one of the greatest obstacles. We believe that establishing a more formal organization around cochlear implant use and music could encourage CI users’ engagement in design and research, thereby enriching their musical experiences.
It is advantageous for designers of tactile displays for CI users to create devices that are adaptable and can accommodate varying musical preferences, hearing abilities, and levels of musical engagement. Although the small size of our sample prevents us from generalizing to the experience of CI users in the larger community, the vastly varying needs of each participant emphasize the necessity of design flexibility and adaptability. In addition, great care should be taken to avoid creating unpleasant experiences, as was temporarily the case for P1 (intense tactile stimulation) and P2 (unpleasant music choice). Prior knowledge of the target audience aids preparation, but a crucial step is ensuring that displays have basic controls for tactile and aural stimulus levels and, in the case of multiactuator devices, independent control of each transducer. Designers should consider flexible or modular hardware that is easily reconfigurable based on the demands of the user. Participatory design practices allow researchers to investigate individual needs. Lastly, we recommend, whenever possible, incorporating visual feedback in the form of gestural or movement-based performance or visualization, which can supplement gaps in perception in the tactile or auditory channel. Listening to music must be viewed as a multifaceted experience that can be difficult and laborious for the hearing impaired, and great care should be taken to ensure the comfort of the listener.

3.2. Concert

This section describes the design, implementation, and initial evaluation of vibrotactile concert furniture, aiming to improve the live music experience of cochlear implant (CI) users. The system created was a direct result of the workshop described in Section 3.1 and was evaluated in a concert scenario (drums, bass, and female vocals) at the Royal Danish Academy of Music. The project aimed to create a better live music experience for CI users by providing a multisensory concert designed with CI limitations in mind but not excluding normal-hearing individuals from participating in the event.

3.2.1. Tactile Displays

For the concert, the hand grip was used, and a new tactile display was designed following the guidelines presented in Section 3.1.5, which were distilled into the following design objectives:
  • Enhance the concert experience by providing congruent vibrotactile feedback.
  • Afford multiple interaction modes and postures to accommodate the varied needs of CI users.
  • Present as furniture rather than medical apparatus.
  • Encourage a social experience.
  • Be usable by CI users and normal-hearing participants alike.

Vibrotactile Furniture

The main tactile display was derived from the bar stool/car seat and footrest setups presented in the workshop (Section 3.1) but with several bespoke elements. Firstly, in order to accommodate individual vibrotactile preferences, we designed a double-slanted system formed by a leaning bench and a footrest, as shown in Figure 3. This configuration allows users to adjust the amount of stimulation they receive by altering their weight distribution, affording a control analogous to tactile mixing between the feet, hands, and buttocks. Another intention of the double-slanted design was to discourage users from resting in a fixed position, in order to avoid the overwhelming experiences reported when sitting in the car seat, as discussed in Section 3.1.5. Neither the leaning bench nor the footrest is ergonomically viable unless they are used together, forcing the user to find an individual balance between the two, both in terms of comfort and vibrotactile preference.
The height and angle of the bench were designed with a single goal in mind: to ensure a comfortable posture for individuals of different statures. The oversized footrest plays an especially important role, as both short and tall individuals can find a position on it that matches their individual preferences.
Two benches measuring 120 × 30 × 75 cm and four footrests of 38 × 73 × 18 cm were built using common woodworking materials: beams of 38 × 76 mm for the bench legs and the beam connecting the footrests, and beams of 38 × 38 mm or 14 × 76 mm for structural reinforcements. The tops of the benches were cut out of 17 mm high-density fiberboard and covered with a thin carpet-like material to avoid slipping. The footrests were cut out of 15 mm thick pine wood, and each one was engraved (Figure 3) in order to encourage exploration of different postures. Figure 4 shows the leaning benches and footrests as they were used at the concert. All contact points with the ground had 2 × 2 × 2 cm rubber feet to decouple the system from the floor and to avoid slipping, a behavior noticed especially with the footrests. Each bench was actuated by a ButtKicker Mini Concert bolted in the middle, underneath the sitting area, while the footrests had a single ButtKicker LFE attached; this meant that the outer ones produced slightly less vibrotactile stimulation.

Audio and Tactile Signals

Similar to the setup used in the workshop, the tactile signals were split between the leaning benches and the footrests. The benches reproduced the signal captured by the vocalist’s microphone, and the footrests played the signal from the double bass, as seen in Figure 5. The hand grip mixed both the bass and vocal signals. The drum sounds were conveyed only acoustically, as it was assumed that the timekeeping and rhythm aspects of drum playing would be sufficiently received through this channel alone (McDermott 2004). The levels for the tactile displays were calibrated during the sound check, based on the discussions from the workshop in Section 3.1 and with help from a professional Tonmeister, with the aim of providing a comfortable balance between acoustic and vibrotactile stimulation.
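Conceptually, this routing reduces to three tactile sends, as in the sketch below, which assumes time-aligned mono arrays for the two sources; the hand-grip mixing gains are hypothetical, not the values set at the concert.

```python
# Conceptual tactile routing for the concert (see Figure 5): benches receive
# the vocal microphone, footrests the double bass, and the hand grip a mix.
import numpy as np

def tactile_sends(vocal_mic: np.ndarray, bass: np.ndarray,
                  grip_vocal_gain: float = 0.5, grip_bass_gain: float = 0.5):
    bench_send = vocal_mic                         # leaning benches: vocals only
    footrest_send = bass                           # footrests: double bass only
    grip_send = grip_vocal_gain * vocal_mic + grip_bass_gain * bass
    return bench_send, footrest_send, grip_send
```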
The main goals of the signal processing were to present tactile stimulation only when the vocal/bass signal was acoustically present and to prevent the tactile stimulation from becoming overwhelming; this required a substantial amount of compression/limiting and equalization, all carried out inside the Yamaha 01V96i digital mixer. The effect settings were preset during one of the rehearsals, with minor adjustments during the soundcheck.
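As a rough illustration of this dynamics control, the sketch below applies a memoryless hard-knee compressor followed by a brick-wall limiter to a tactile feed. The threshold, ratio, and makeup gain are illustrative assumptions and omit the attack/release behavior and equalization of the actual 01V96i processing.

```python
# Simplified dynamics chain for a tactile channel: static compression keeps the
# stimulation present while the source plays; the limiter caps the peaks so the
# vibrations cannot become overwhelming. All parameter values are assumptions.
import numpy as np

def compress(x, threshold_db=-24.0, ratio=4.0, makeup_db=6.0):
    """Memoryless hard-knee compressor (no attack/release smoothing)."""
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over_db = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio) + makeup_db
    return x * 10.0 ** (gain_db / 20.0)

def limit(x, ceiling=0.9):
    """Brick-wall limiter protecting the actuators (and the listener)."""
    return np.clip(x, -ceiling, ceiling)

# Example: an 80 Hz test tone standing in for the double bass footrest feed.
sr = 48000
t = np.arange(sr) / sr
footrest_feed = limit(compress(0.5 * np.sin(2 * np.pi * 80.0 * t)))
```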

3.2.2. Concert Setup

The concert was organized in collaboration with the Tonmeister department of The Royal Danish Academy of Music and the Center for Hearing and Balance at Rigshospitalet, Copenhagen, as the main act of a workshop showcasing assistive hearing technologies called the CoolHear Workshop. Participants were recruited from the entire spectrum of hearing characteristics, as well as from professionals working with CI and hearing-impaired individuals, such as audiologists, logopedists, technicians, and artists. The event was free, was promoted locally through social networks and individual invitations, and gathered more than 50 attendees, of whom at least 6 were CI users. The main act took place in Studio 3 of The Royal Danish Academy of Music. During the concert introduction, the concept was described to the audience, and the multisensory setup was introduced. The tilted benches were placed among the regular seats in order to avoid any feeling of exclusion; Figure 6 shows the tilted benches in use during the concert. Attendees were asked to give priority to individuals with CIs and to take turns interacting with the vibrotactile furniture.

Music

Three professional musicians were employed to perform on stage, playing drums and double bass (3/4 size) and singing (female mezzo-soprano vocals). The entire band was part of the concert design from the beginning, and two of the musicians are co-authors who also participated in the design workshop. This early relationship resulted in excellent communication regarding the needs and challenges of CI users, and it was especially beneficial that the bass player and drummer were the same ones present throughout the participatory design workshop described in Section 3.1.
The setlist for the concert was decided collectively, based on factors that surfaced throughout the first workshop (Section 3.1), on requirements imposed by the multisensory hardware setup, and on the instruments available:
  • The music should be popular, to increase the chances of it being recognized;
  • The music should include lyrics and should be focused on the vocal line;
  • The music should revolve around simple, repetitive riffs;
  • The melody should be easily represented by a single instrument;
  • The music should be easily transposable to the vocalist’s lowest register;
  • The setlist should contain songs from multiple genres.
Based on the items above, several songs were trialed during rehearsal sessions, and the setlist consisted of two sets of two songs each. The decision to split the concert into two sections was made to avoid the auditory fatigue usually associated with the CI music listening experience. The final setlist was based on the following songs, adapted for bass, drums, and vocals:
  • Peggy Lee—Fever;
  • The Civil Wars—Billie Jean;
  • Louis Armstrong—What a Wonderful World;
  • Postmodern Jukebox—Seven Nation Army;
  • J. Knight, D. Farrant, and H. Sanderson—Don’t Give Up On Love (improvisation);
  • Ruth Brown—5-10-15 Hours (improvisation);
  • Etta James—Something’s Got a Hold on Me (backup song, not used).
During a casual conversation in the break between the two sections of the concert, some of the CI users who tried the multisensory furniture expressed their appreciation for the experience and requested more songs in the second part. The musicians agreed to play an encore after receiving a standing ovation at the end of the second set, resulting in an impromptu adaptation of the last two songs on the list above. The interpretation of these songs was more improvised, featuring solo passages for drums and bass, respectively. The cumulative play time added up to a little over 25 min.

3.3. Concert Experience Evaluation

The goal of the evaluation was to understand the perceived quality of the multisensory concert experience and to assess the usability of the vibrotactile systems. Throughout the concert, more than 20 participants tried the multisensory furniture, and a similar number experienced the hand grip. Of these, five were CI users, and only their responses are included in the results.

3.3.1. Methodology

The methodology comprised a triangulation of qualitative methods: video-based observations, postexperience semistructured interviews, and a structured written interview administered later (Bjorner 2016). Due to the challenges CIs present for speech perception in noisy environments, the semistructured interview transformed into a focus group, in which one participant relayed the opinions of others while communicating with their peers through sign language. The fundamental method used to synthesize results from the different data-gathering methods was meaning condensation, as described in (Bjorner 2016).

3.3.2. Concert Results

3.3.2.1. Focus Group

Once the concert was completed, the CI users were invited to answer questions regarding their experience and preferences. The interview was recorded and transcribed, but due to the aforementioned noise issue, which led participants to resort to sign language, only a limited amount of data was obtained through this method; it is treated as coming from a collective participant.
The participant group mentioned that the concert was “very exciting and fun”, that they could “almost hear the voice”, and jokingly added that they would have liked a whole hour of concert instead of the limited time allocated. Furthermore, they collectively expressed a preference for the vibrotactile furniture over the hand grip, arguing that the latter’s signal was too complex and that it was much easier to understand distinct source streams, as was the case with the tilted benches and footrests. This setup allowed participants to experience “loud bass” without saturating the compressor in the CI, a problem they mentioned usually having when attending regular concerts. Nevertheless, one CI user complained that they could not feel the drums, and when it was explained that the drums were only acoustic, they expressed a wish for this signal to be represented vibrotactilely as well. Lastly, the same participant mentioned that once they felt the tactile stimulation generated by the singer’s laughter (in between songs), they could “tune in” and better understand the vibrotactile sensation.
Towards the end of the focus group, the participants mentioned that the balance between the instruments’ volumes, both acoustic and tactile, was pleasant and made sense to them, but they complained that they could sometimes feel the double bass through the benches, which were supposed to reproduce only the vocal signal. This was an unfortunate side effect of the small stage: the bass was picked up by the vocalist’s microphone. On top of this bleed, the generous amount of compression and makeup gain further amplified the unwanted signal in the tilted benches. Lastly, the participant group collectively mentioned that towards the end (during the encore) the tilted benches became slightly overwhelming. This was the result of an error made when bringing the tactile volume back up for the encore; while the person in charge of levels was confident that the mixing faders were at the same positions as before, the participants reported otherwise, further emphasizing the importance of a good balance and mix between all types of signals.
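To make the bleed mechanism concrete, the following minimal Python sketch shows how a downward compressor with makeup gain attenuates loud program material while boosting quiet bleed that sits below the threshold. The threshold, ratio, and makeup values are illustrative assumptions, not the settings used at the concert.

def compressor_gain_db(level_db, threshold_db=-30.0, ratio=4.0, makeup_db=12.0):
    """Net gain (dB) applied by a static downward compressor with makeup gain."""
    over = max(level_db - threshold_db, 0.0)        # overshoot above threshold
    out_db = level_db - over * (1.0 - 1.0 / ratio)  # compress the overshoot
    return out_db + makeup_db - level_db            # net applied gain, in dB

# A vocal peak at -6 dBFS is turned down by 6 dB, while double-bass bleed
# at -40 dBFS stays below the threshold and receives the full +12 dB makeup.
for level_db in (-6.0, -40.0):
    print(f"input {level_db:6.1f} dBFS -> net gain {compressor_gain_db(level_db):+5.1f} dB")

In other words, heavy compression plus makeup gain narrows the gap between the vocal and the bleed, lifting the bleed into the perceptible tactile range of the benches.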

3.3.2.2. Video-Based Observations

The entire concert was recorded using a Zoom Q8 video camera6 mounted on a tripod and aimed at the two installations; a screenshot from the recording can be seen in Figure 7. The observation analysis focused on the interaction with the vibrotactile furniture and the hand grip during the concert.
Once the music started, the four CI participants using the vibrotactile furniture at the time indicated good reception of the rhythm by tapping their feet or nodding their heads, while frequently adjusting their posture, rotating positions among themselves, and making room for others to use the tilted benches. Throughout the event, the CI users used sign language to communicate with one another, resulting in a great deal of interaction and laughter. The hand grip was explored in a similar way but did not elicit the same head bobbing or foot tapping, and interactions with it were shorter than with the tilted benches: only a few seconds at first, although as the concert evolved, some CI participants spent minutes at a time with it. Notably, while the tilted benches were in use 100% of the time, the hand grip was not.
During the second song, participants made fewer adjustments and seemed to focus more on the music, probably because they had already found comfortable postures, but they continued to communicate through sign language. One participant was particularly interested in the entire system, exploring different postures and interaction methods, including grabbing both benches, holding the hand grip and a bench concomitantly, touching a bench and a footrest at the same time (as seen in Figure 7), and even covering their ears. Two participants also experimented with the vibrotactile furniture barefoot.
As the concert progressed, interaction with the hand grip slowed down even more, whereas the tilted benches were constantly in use, with more and more time being spent on them, probably because the novelty effect was wearing off and participants could focus on the music. Towards the end, two participants in particular (who seemed to be friends) experienced the last three songs on the benches without interruption. Another participant sat right next to the benches, constantly touching them with her hand, while still engaging with her peers through sign language.

3.3.2.3. Online Interview

In order to validate some of the data gathered through the interviews and video-based observations, an online written interview was sent to the CI participants via email 10 days after the event. In addition to validation, these interviews aimed to capture the participants’ medium-term impression of the concert and attitude towards it, and to collect further feedback that might have been overlooked.
The interview consisted of six questions:
  • What was it like attending the concert with the vibrating furniture?
  • How did it feel to sit on the furniture during the concert?
  • How did you experience the connection between music and vibrations?
  • Which instrument did you prefer and why?
  • Would you recommend this type of concert furniture to other CI users?
  • Do you have any other comments?
Two participants responded to the interview, reiterating that the experience was “fun and nothing like I have tried before” and adding that “one could almost hear better and sense it (the music)”. Furthermore, both respondents mentioned that they preferred the bass instrument, as it created a good sense of rhythm, and both would have liked the drums to be reproduced through the vibrotactile furniture as well. On the other hand, they both remembered the microphone bleed as a problem and concluded that the hand grip did not add anything to the experience. One respondent mentioned that the vibrotactile furniture became a bit overwhelming at some point, while the other reported that it was strong but not uncomfortable. Lastly, one respondent noted that the furniture might take up too much space but appreciated that one can get a good experience by interacting with it with the hands as well. Both concluded that they would recommend this type of concert to others, and one suggested adding visual support indicating which instruments are being represented through the tactile modality.

3.3.2.4. Limitations

By organizing an open and public event, we aimed to increase validity by ensuring an ecologically grounded evaluation, but this came with its own challenges and limitations. First and foremost, it was hard to estimate how many participants would be CI users, or to gain a good understanding of each participant’s hearing characteristics. We tried to promote the event through the relevant channels, targeting the CI community directly, but only about 10% of the participants were in our target group. In addition, it is safe to assume that the CI users who decided to participate are those actively seeking musical experiences, resulting in some degree of positive bias towards the experience. As a further limitation, the participants told us that other CI users from their network were interested in attending but were deterred by the fact that the event was run in English. This is an aspect to take into account in future iterations.
Another limitation of the data collection was the assumption that we would be able to interview the CI users in the same environment. As mentioned above, the noise floor in the room was high enough to discourage some CI users from talking; they instead relayed their input through sign language via their peers. This limitation could easily have been avoided if a dedicated, silent room had been prepared in advance.
Lastly, because the participants were encouraged to move around when interacting with the vibrotactile systems, there was a considerable amount of occlusion in the video recordings, preventing us from observing all the details of their interaction. A multicamera setup could have solved this issue, but unfortunately, we did not prepare for that scenario.

4. Discussion

Combining the results from the three data-gathering methods allows us to draw preliminary conclusions regarding the experience of the CI users. First and foremost, the multisensory concert was well received, as can be deduced from analyzing all three data channels independently. This finding should not be generalized, however, as our participant pool was limited and, as mentioned in Section 3.3.2.4, the CI users who attended may be positively biased, being openly in search of musical experiences. Nevertheless, the data we collected strongly indicate that a multisensory concert experience is appreciated, especially by CI users who avoid concerts due to the poor musical reception the device provides. One of the most important contributing factors is the flexibility of the vibrotactile furniture; as observed in the video recordings, participants experimented with a multitude of postures at first, but as the concert progressed, they settled into the seemingly most comfortable one, making use of multiple points of contact. Of these, the hands appear to be preferred for sensing vibrotactile stimulation, especially for mid–high frequency content such as voice. Another appreciated factor was the ability to experience loud music, generally associated with concerts, without fatigue. As noted by Zeng (2004), the dynamic range for electrical stimulation is frequency-dependent but always lies between 10 and 30 dB, as opposed to approximately 120 dB for a normal-hearing individual. This reduced dynamic range can be perceptually extended by augmenting the auditory signal with vibrotactile stimulation (Aker et al. 2022), thus allowing us to create experiences that feel loud. Since bass frequencies usually carry the most energy, participants associated the increased loudness with this frequency range without being overwhelmed. Lastly, it was encouraging to see that participants could associate and integrate the vibrotactile stimuli generated by the vocalist with the electrical stimulation provided by the CI. We expect this integration to strengthen with longer exposure, resulting in better multisensory concert experiences.
An important factor worth discussing is multisensory processing and mixing. It is paramount to use limiting amplifiers to keep tactile sensations from becoming overwhelming while remaining strong enough to be perceived. As reported by Hopkins et al. (2016), the dynamic range for tactile stimulation is fairly limited, between 36 and 47 dB depending on the body area, further emphasizing the importance of strict dynamic processing and mixing. Our observations agree with Hopkins et al. (2016) regarding body-area-dependent comfort levels at the top end of the vibrotactile amplitude range, as observed when one participant decided to sit next to the vibrotactile furniture while still keeping her hand on the benches. Furthermore, both in the postexperience interview and in the online interview, it was mentioned that the vibrotactile stimulation did become overwhelming at one point. This happened during the encore, after the mixing engineer had reset the tactile amplitude to supposedly the same level as during the sound check, indicating that the range between pleasant and uncomfortable is very narrow.
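As a rough illustration of what such strict dynamic processing implies, the sketch below remaps incoming signal levels into a 36 dB tactile window, the lower bound reported by Hopkins et al. (2016). The input range and the hard ceiling are our own assumptions for the example, not the processing chain used at the concert.

def to_tactile_db(level_db, in_floor=-60.0, in_ceil=0.0,
                  out_floor=-36.0, out_ceil=0.0):
    """Remap an input level (dB) into a 36 dB tactile window.

    The output window spans roughly the detection threshold (out_floor)
    to the onset of discomfort (out_ceil); clipping at the ceiling acts
    as a brickwall safety limiter.
    """
    clipped = min(max(level_db, in_floor), in_ceil)
    scale = (out_ceil - out_floor) / (in_ceil - in_floor)  # 36/60 = 0.6
    return out_floor + (clipped - in_floor) * scale

print(to_tactile_db(-60.0))  # -36.0: quietest input lands at the tactile threshold
print(to_tactile_db(-20.0))  # -12.0: 60 dB of input squeezed into 36 dB of output
print(to_tactile_db(5.0))    #   0.0: anything hotter is hard-limited at the ceiling

With only 36 dB between barely perceptible and uncomfortable, even a small fader offset occupies a noticeable fraction of the usable range, which is consistent with participants immediately noticing the mismatched levels during the encore.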
Since the vibrotactile furniture was used considerably more than the hand grip, even though the participants tried extensive hand interaction with the benches, it is safe to conclude that the problem with the latter lies in the signals it reproduced. We are confident in suggesting that mixing multiple auditory streams into one tactile display is not an effective technique, especially since the hand grip was the preferred display in the participatory design workshop (Section 3.1) when it reproduced the vocal signal alone. The importance of this one-to-one mapping was also expressed throughout the two interviews, when participants complained that they could “feel” the double bass through the benches, which were supposed to reproduce only the vocal signal. It was also interesting to observe one participant who tried to feel the two vibrotactile displays, the hand grip and a bench, at the same time in order to compare the vibrations.
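The one-to-one mapping the participants preferred can be expressed as a simple routing table in which each auditory stream drives exactly one actuator group instead of a summed tactile mixdown. The sketch below is purely illustrative: the benches did carry the vocal signal, but the bass-to-footrest assignment and all identifiers are our assumptions.

# Hypothetical one-to-one routing: each stream drives a single actuator
# group; no streams are summed into a shared tactile mixdown.
ROUTING = {
    "vocals": "tilted_benches",   # mid-high content, favored under the hands
    "double_bass": "footrests",   # low-frequency content (assumed assignment)
    # Drums were acoustic-only at this concert; participants asked for
    # them to be added as a third tactile channel.
}

def actuator_for(stream: str) -> str:
    """Return the single actuator group assigned to an auditory stream."""
    if stream not in ROUTING:
        raise ValueError(f"no tactile channel assigned to stream '{stream}'")
    return ROUTING[stream]

An explicit mapping like this also makes it straightforward to add the requested drum channel, or to drive the visual support one respondent suggested for indicating which instruments are rendered tactilely.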
As the concert was part of a larger workshop, many hearing-aid users and children with cochlear implants attended as well. While these two groups are not the focus of this study, it is important to note that hearing-aid users found the vibrotactile stimulation distracting and potentially annoying, resulting in a much shorter interaction with the systems, while children avoided them entirely. Young CI users, on the other hand, were more enthusiastic about virtual reality experiences and computer/tablet-based applications designed for the implanted population.

5. Conclusions

This article described how we applied a user-centric participatory design method to create a multisensory concert experience including audio, video, and vibrotactile stimulation. The vibrotactile furniture design focused on flexibility and user choice, as opposed to most previous work, which employed fixed-posture experiences. Through the design process, we concluded that it is fundamental to acknowledge individual CI users’ needs, which may relate to their hearing characteristics and, just as importantly, to their attitude towards music listening in general. Furthermore, we observed that when designing multiactuator systems, it is important to consider the body areas actuated, as different zones have different vibrotactile sensing characteristics.
Two vibrotactile displays were premiered at the Royal Danish Academy of Music during a workshop created for people with hearing impairment. More than 50 people participated in the event, 5 of whom were CI users. The concert lasted 25 min and presented adaptations of 6 popular songs from different generations, interpreted by a female vocalist accompanied by double bass and drums. The evaluation focused on the usability of the displays and on the multisensory concert experience as a whole, and it indicated that such a concert concept is welcomed by the CI community, as it provides several benefits, such as better stream separation. Although the concert experience was welcomed, our evaluation showed that audio-vibrotactile mixing is crucial, and small adjustments can result in an overwhelming experience, as is to be expected given the limited dynamic range of both the CI and the tactile receptors. When it comes to multiactuator displays, the participants had a strong preference for a one-to-one mapping between musical instruments and the body areas actuated. This idea has been explored and confirmed by previous research on speaker distributions based on the Model Human Cochlea (Karam et al. 2009), which suggests employing a positive correlation between stimulus frequency and the vertical distribution of vibrotactile stimulation. It is also important to take participants’ tastes and preferences in leisure activities into consideration. The CoolHear Workshop included both a demo session with different gamified experiences and a concert, and we observed that younger audience members preferred the gamified experiences and skipped the concert, while older participants engaged with the concert but did not enjoy the gamified experiences, especially those involving a virtual reality display.
The work presented in this article suggests that multisensory live music has the potential to improve the concert experience for CI users, but the field is in its infancy. Future work should focus on further understanding which factors are responsible for such improvements, as well as on continuing to involve CI users in the vibrotactile display design process as early as possible.

Author Contributions

S.S. and R.P. worked on the conceptualization of the project; the methodology was developed by R.P., F.G., D.C. and P.W.; F.G. and P.W. were part of the band playing in the concert; the workshop experiments were run by R.P., F.G. and D.C. with help from P.W. The investigation, formal analysis, data curation, resources, and software were carried out by R.P., as well as writing the original draft. S.S. was responsible for supervision, administrating the project, and acquiring the necessary funding; L.M.P.-S. co-supervised and contributed to the data gathering, analysis, and interpretation, as well as recruiting participants. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NordForsk Nordic University Hub through the Nordic Sound and Music project (number 86892).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank Tonmeister Jesper Andersen from the Royal Danish Academy of Music for hosting our event and providing his invaluable insight.

Conflicts of Interest

The authors declare no conflict of interest.

Notes

1. https://subpac.com/ (accessed on 22 February 2023).
2. Facebook CI Group (accessed on 1 October 2021).
3. https://thebuttkicker.com/ (accessed on 22 February 2023).
4. http://tactilelabs.com/ (accessed on 22 February 2023).
5.
6. https://zoomcorp.com/ (accessed on 22 February 2023).

References

  1. Aker, Scott C., Hamish Innes-Brown, Kathleen F. Faulkner, Marianna Vatti, and Jeremy Marozeau. 2022. Effect of audio-tactile congruence on vibrotactile music enhancement. The Journal of the Acoustical Society of America 152: 3396–409. [Google Scholar] [CrossRef] [PubMed]
  2. Aydın, Diler, and Suzan Yıldız. 2012. Effect of classical music on stress among preterm infants in a neonatal intensive care unit. HealthMED 6: 3162–68. [Google Scholar]
  3. Baijal, Anant, Julia Kim, Carmen Branje, Frank A. Russo, and Deborah I. Fels. 2015. Composing vibrotactile music: A multisensory experience with the emoti-chair. CoRR, 509–15. [Google Scholar] [CrossRef] [Green Version]
  4. Başkent, Deniz, and Robert V. Shannon. 2005. Interactions between cochlear implant electrode insertion depth and frequency-place mapping. The Journal of the Acoustical Society of America 117: 1405–16. [Google Scholar] [CrossRef] [PubMed]
  5. Bjorner, T. 2016. Qualitative Methods for Consumer Research: The Value of the Qualitative Approach in Theory and Practice. Copenhagen: Hans Reitzels Forlag. [Google Scholar]
  6. Buyens, Wim, Bas Van Dijk, Marc Moonen, and Jan Wouters. 2014. Music mixing preferences of cochlear implant recipients: A pilot study. International Journal of Audiology 53: 294–301. [Google Scholar] [CrossRef] [PubMed]
  7. Cavdir, Doga, Francesco Ganis, Razvan Paisa, Peter Williams, and Stefania Serafin. 2022. Multisensory Integration Design in Music for Cochlear Implant Users. Paper presented at the 19th Sound and Music Computing Conference, Saint-Étienne, France, June 5–12. [Google Scholar] [CrossRef]
  8. Cavdir, Doga, and Ge Wang. 2020. Felt sound: A shared musical experience for the deaf and hard of hearing. Paper presented at the 20th International Conference on New Interfaces for Musical Expression, Birmingham, UK, July 25–27. [Google Scholar]
  9. Certo, Michael V., Gavriel D. Kohlberg, Divya A. Chari, Dean M. Mancuso, and Anil K. Lalwani. 2015. Reverberation time influences musical enjoyment with cochlear implants. Otology and Neurotology 36: e46–e50. [Google Scholar] [CrossRef]
  10. Coffman, Don. 2002. Music and quality of life in older adults. Psychomusicology: A Journal of Research in Music Cognition 18: 76–88. [Google Scholar] [CrossRef]
  11. Dickens, Amy, Chris Greenhalgh, and Boriana Koleva. 2018. Facilitating accessibility in performance: Participatory design for digital musical instruments. Journal of the Audio Engineering Society 66: 211–19. [Google Scholar] [CrossRef]
  12. Dorman, Michael F., Louise Loiselle, Josh Stohl, William A. Yost, Anthony Spahr, Chris Brown, and Sarah Cook. 2014. Interaural level differences and sound source localization for bilateral cochlear implant patients. Ear and Hearing 35: 633. [Google Scholar] [CrossRef] [Green Version]
  13. Dorman, Michael F., Louise H. Loiselle, Sarah J. Cook, William A. Yost, and René H. Gifford. 2016. Sound source localization by normal-hearing listeners, hearing-impaired listeners and cochlear implant listeners. Audiology and Neurotology 21: 127–31. [Google Scholar] [CrossRef] [Green Version]
  14. Dritsakis, Giorgos, Rachel M. van Besouw, Pádraig Kitterick, and Carl A. Verschuur. 2017. A music-related quality of life measure to guide music rehabilitation for adult cochlear implant users. American Journal of Audiology 26: 268–82. [Google Scholar] [CrossRef] [PubMed]
  15. Dritsakis, Giorgos, Rachel M. van Besouw, and Aoife O’ Meara. 2017. Impact of music on the quality of life of cochlear implant users: A focus group study. Cochlear Implants International 18: 207–15. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Fletcher, Mark D., Nour Thini, and Samuel W. Perry. 2020. Enhanced pitch discrimination for cochlear implant users with a new haptic neuroprosthetic. Scientific Reports 10: 10354. [Google Scholar] [CrossRef]
  17. Fontana, Federico, Ivan Camponogara, Paola Cesari, Matteo Vallicella, and Marco Ruzzenente. 2016. An Exploration on Whole-body and Foot-based Vibrotactile Sensitivity to Melodic Consonance. Paper presented at the 13th Sound and Music Computing Conference (SMC2016), Hamburg, Germany, August 31–September 3. [Google Scholar] [CrossRef]
  18. Foxe, John, Glenn Wylie, Antigona Martinez, Charles Schroeder, Daniel Javitt, David Guilfoyle, Walter Ritter, and Micah Murray. 2002. Auditory-somatosensory multisensory processing in auditory association cortex: An fmri study. Journal of Neurophysiology 88: 540–43. [Google Scholar] [CrossRef] [PubMed]
  19. Frid, Emma. 2019. Accessible digital musical instruments—A review of musical interfaces in inclusive music practice. Multimodal Technologies and Interaction 3: 57. [Google Scholar] [CrossRef] [Green Version]
  20. Frosolini, Andrea, Giulio Badin, Flavia Sorrentino, Davide Brotto, Nicholas Pessot, Francesco Fantin, Federica Ceschin, Andrea Lovato, Nicola Coppola, Antonio Mancuso, and et al. 2022. Application of patient reported outcome measures in cochlear implant patients: Implications for the design of specific rehabilitation programs. Sensors 22: 8770. [Google Scholar] [CrossRef] [PubMed]
  21. Fuller, Christina D., John J. Galvin III, Bert Maat, Deniz Başkent, and Rolien H. Free. 2018. Comparison of two music training approaches on music and speech perception in cochlear implant users. Trends in Hearing 22: 2331216518765379. [Google Scholar] [CrossRef] [Green Version]
  22. Galvin, John J., III, Elizabeth Eskridge, Sandy Oba, and Qian-Jie Fu. 2012. Melodic contour identification training in cochlear implant users with and without a competing instrument. In Seminars in Hearing. New York: Thieme Medical Publishers, vol. 33, pp. 399–409. [Google Scholar]
  23. Gfeller, Kate, Virginia Driscoll, and Adam Schwalje. 2019. Adult cochlear implant recipients’ perspectives on experiences with music in everyday life: A multifaceted and dynamic phenomenon. Frontiers in Neuroscience 13: 1229. [Google Scholar] [CrossRef] [Green Version]
  24. Gosine, Jane, Deborah Hawksley, and Susan LeMessurier Quinn. 2017. Community building through inclusive music-making. Voices: A World Forum for Music Therapy 17. [Google Scholar] [CrossRef] [Green Version]
  25. Hinderink, Johannes B., Paul F. M. Krabbe, and Paul Van Den Broek. 2000. Development and application of a health-related quality-of-life instrument for adults with cochlear implants: The nijmegen cochlear implant questionnaire. Otolaryngology-Head and Neck Surgery 123: 756–65. [Google Scholar] [CrossRef]
  26. Holden, Laura K., Jill B. Firszt, Ruth M. Reeder, Rosalie M. Uchanski, Noël Y. Dwyer, and Timothy A. Holden. 2016. Factors affecting outcomes in cochlear implant recipients implanted with a perimodiolar electrode array located in scala tympani. Otology and Neurotology 37: 1662–68. [Google Scholar] [CrossRef] [Green Version]
  27. Hopkins, Carl, Saúl Maté-Cid, Robert Fulford, Gary Seiffert, and Jane Ginsborg. 2016. Vibrotactile presentation of musical notes to the glabrous skin for adults with normal hearing or a hearing impairment: Thresholds, dynamic range and high-frequency perception. PLoS ONE 11: e0155807. [Google Scholar] [CrossRef] [Green Version]
  28. Jack, Robert, Andrew Mcpherson, and Tony Stockman. 2015. Designing tactile musical devices with and for deaf users: A case study. Paper presented at the International Conference on the Multimedia Experience of Music, Sheffield, UK, March 23–25. [Google Scholar]
  29. Jones, Lynette A., and Nadine B. Sarter. 2008. Tactile displays: Guidance for their design and application. Human Factors 50. [Google Scholar] [CrossRef]
  30. Jousmäki, V., and R. Hari. 1998. Parchment-skin illusion: Sound-biased touch. Current Biology 8. [Google Scholar] [CrossRef] [Green Version]
  31. Karam, Maria, Frank Russo, and Deborah Fels. 2009. Designing the model human cochlea: An ambient crossmodal audio-tactile display. IEEE Transactions on Haptics 2: 160–69. [Google Scholar] [CrossRef]
  32. Kayser, Christoph, Christopher Petkov, Mark Augath, and Nikos Logothetis. 2005. Integration of touch and sound in auditory cortex. Neuron 48: 373–84. [Google Scholar] [CrossRef] [Green Version]
  33. Lucas, Alex, Miguel Ortiz, and Franziska Schroeder. 2020. The longevity of bespoke, accessible music technology: A case for community. Paper presented at the International Conference on New Interfaces for Musical Expression, Birmingham, UK, July 21–25; pp. 243–48. [Google Scholar]
  34. Lucas, Alex Michael, Miguel Ortiz, and Franziska Schroeder. 2019. Bespoke Design for Inclusive Music: The Challenges of Evaluation. Paper presented at the 19th International Conference on New Interfaces for Musical Expression, Porto Alegre, Brazil, June 3–6; pp. 105–9. [Google Scholar]
  35. Marti, Patrizia, and Annamaria Recupero. 2019. Is deafness a disability? Designing hearing aids beyond functionality. Paper presented at the 2019 Conference on Creativity and Cognition, San Diego, CA, USA, June 23–26; pp. 133–43. [Google Scholar]
  36. McDermott, Hugh J. 2004. Music perception with cochlear implants: A review. Trends in Amplification 8: 49–82. [Google Scholar] [CrossRef] [Green Version]
  37. McRackan, Theodore R., Brittany N. Hand, Craig A. Velozo, Judy R. Dubno, Justin S. Golub, Eric P. Wilkinson, Dawna Mills, John P. Carey, Nopawan Vorasubin, Vickie Brunk, and et al. 2019. Cochlear implant quality of life (CIQOL): Development of a profile instrument (CIQOL-35 profile) and a global measure (CIQOL-10 global). Journal of Speech, Language, and Hearing Research 62: 3554–63. [Google Scholar] [CrossRef]
  38. Mitchell, Laura, Raymond MacDonald, Christina Knussen, and Mick Serpell. 2007. A survey investigation of the effects of music listening on chronic pain. Psychology of Music 35: 37–57. [Google Scholar] [CrossRef]
  39. Nanayakkara, Suranga, Elizabeth Taylor, Lonce Wyse, and Sim Ong. 2009. An enhanced musical experience for the deaf: Design and evaluation of a music display and a haptic chair. Paper presented at the CHI’09: CHI Conference on Human Factors in Computing Systems, Boston, MA, USA, April 4–9; pp. 337–46. [Google Scholar] [CrossRef]
  40. Nanayakkara, Suranga, Lonce Wyse, Sim Ong, and Elizabeth Taylor. 2012. Enhancing musical experience for the hearing-impaired using visual and haptic displays. Human-Computer Interaction 28: 115–60. [Google Scholar]
  41. Nordahl, Rolf, Stefania Serafin, Niels C. Nilsson, and Luca Turchet. 2012. Enhancing realism in virtual environments by simulating the audio-haptic sensation of walking on ground surfaces. Paper presented at the 2012 IEEE Virtual Reality Workshops (VRW), Costa Mesa, CA, USA, March 4–8. [Google Scholar] [CrossRef] [Green Version]
  42. Oxenham, Andrew J. 2008. Pitch perception and auditory stream segregation: Implications for hearing loss and cochlear implants. Trends in Amplification 12: 316–31. [Google Scholar] [CrossRef] [Green Version]
  43. Paisa, R., J. Andersen, N. C. Nilsson, and S. Serafin. 2022. A comparison of audio-to-tactile conversion algorithms for melody recognition. Paper presented at the Baltic-Nordic Acoustic Meeting, Aalborg, Denmark, May 9–11. [Google Scholar]
  44. Quintero, Christian. 2020. A review: Accessible technology through participatory design. Disability and Rehabilitation: Assistive Technology 17: 369–75. [Google Scholar] [CrossRef]
  45. Rouger, J., S. Lagleyre, B. Fraysse, S. Deneve, O. Deguine, and P. Barone. 2007. Evidence that cochlear-implanted deaf patients are better multisensory integrators. Proceedings of the National Academy of Sciences of the United States of America 104: 7295–300. [Google Scholar] [CrossRef]
  46. Samuels, Koichi, and Franziska Schroeder. 2019. Performance without barriers: Improvising with inclusive and accessible digital musical instruments. Contemporary Music Review 38: 476–89. [Google Scholar] [CrossRef]
  47. Schellenberg, E. Glenn. 2016. Music and Nonmusical Abilities. In The Child as Musician: A Handbook of Musical Development, 2nd ed. Oxford: Oxford University Press, pp. 149–76. [Google Scholar] [CrossRef]
  48. Schäfer, Thomas, Peter Sedlmeier, Christine Städtler, and David Huron. 2013. The psychological functions of music listening. Frontiers in Psychology 4: 511. [Google Scholar] [CrossRef] [Green Version]
  49. Someren, Maarten, Yvonne Barnard, and Jacobijn Sandberg. 1994. The Think Aloud Method—A Practical Guide to Modelling Cognitive Processes. London: Academic Press. [Google Scholar]
  50. Stein, Barry E., and M. Alex Meredith. 1993. The Merging of the Senses. A Bradford Book. Cambridge: MIT Press. [Google Scholar]
  51. Tilley, A. R., Henry Dreyfuss, and Henry Dreyfuss Associates. 1993. The Measure of Man and Woman: Human Factors in Design. New York: Whitney Library of Design. [Google Scholar]
  52. Wilska, Alvar. 1954. On the vibrational sensitivity in different regions of the body surface. Acta Physiologica Scandinavica 31: 285–89. [Google Scholar] [CrossRef]
  53. Wilson, Blake S., and Michael F. Dorman. 2008. Cochlear implants: A remarkable past and a brilliant future. Hearing Research 242: 3–21. [Google Scholar] [CrossRef] [Green Version]
  54. Zeng, Fan-Gang. 2004. Compression and Cochlear Implants. New York: Springer, pp. 184–220. [Google Scholar] [CrossRef]
  55. Zeng, Fan-Gang, and John J. Galvin. 1999. Amplitude mapping and phoneme recognition in cochlear implant listeners. Ear and Hearing 20: 60–74. [Google Scholar] [CrossRef]
  56. Zeng, Fan-Gang, Ginger Grant, John Niparko, John Galvin, Robert Shannon, Jane Opie, and Phil Segel. 2002. Speech dynamic range and its effect on cochlear implant performance. The Journal of the Acoustical Society of America 111: 377. [Google Scholar] [CrossRef] [Green Version]
  57. Črnčec, Rudi, Sarah Wilson, and Margot Prior. 2006. The cognitive and academic benefits of music to children: Facts and fiction. Educational Psychology EDUC PSYCHOL-UK 26: 579–94. [Google Scholar] [CrossRef]
Figure 1. Diagram of installation 1.
Figure 2. Example configurations experienced during the workshop.
Figure 3. Different posture suggestions engraved on the footrests.
Figure 4. Leaning benches and footrests.
Figure 5. Tactile stimulation signal chain.
Figure 6. Vibrotactile furniture in use during the concert.
Figure 7. Exploratory interaction with the tilted benches and footrest.