Article

Sound Descriptions of Haptic Experiences of Art Work by Deafblind Cochlear Implant Users

1 ISE Research Group, Faculty of Educational Sciences, University of Helsinki, 00100 Helsinki, Finland
2 Fashion/Textile Futures Research Group, Department of Design, Aalto University School of Arts, Design and Architecture, 00560 Helsinki, Finland
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2018, 2(2), 24; https://doi.org/10.3390/mti2020024
Submission received: 1 April 2018 / Revised: 24 April 2018 / Accepted: 8 May 2018 / Published: 11 May 2018

Abstract
Deafblind persons’ perception and experiences are based on their residual auditive and visual senses and on touch. Their haptic exploration, through movements and orientation towards objects, gives blind persons direct, independent experience. Few studies explore the aesthetic experiences and artefact appreciation of deafblind people using cochlear implant (CI) technology, or how they interpret and express their perceived aesthetic experience through another sensory modality. While speech recognition has been studied extensively in this area, auditive descriptions made by CI users remain a less-studied domain. The present research intervention describes and analyses how five deafblind people shared their interpretations of five statues vocally, using sounds and written descriptions based on their haptic explorations. The participants found new and multimodal ways of expressing their experiences, as well as re-experiencing them through technological aids. We also found that CI users modify technology to better suit their personal needs. We conclude that CI technology in combination with self-made sound descriptions enhances the memorization of haptic art experiences, which can be recalled through the recordings of the sound descriptions. This research expands the idea of auditive descriptions and encourages user-produced descriptions as artistic supports to traditional linguistic audio descriptions. These can be used to create personal auditive–haptic memory collections, similar to how sighted people create photo albums.

1. Introduction

Blind touch and the experiential knowledge of blind persons and their felt experience of the world have been utilised in spatial and sensory research in a variety of creative fields, from architecture [1,2,3,4,5,6], design and craft [7,8,9,10] and philosophy [11] to human geography and ethnography [12,13,14,15]. Through the deafblind participants’ condition, sighted people can gain insights into fundamental aspects of our living environment that are often concealed from them, as the sighted often take haptic experiences for granted [16]. One of the writers of this article is blind and a native cochlear implant (CI) user. Through his experience, we come closer to understanding this user group’s experiences and needs. As we have unique access to the participants’ haptic experiences, there was an opportunity to capture the process of creating sound descriptions from a user experience perspective.
In our study, the participants had impairments in multiple sensory modalities; thus, the participants’ use of haptic exploration was central to our ethnographic research. Gibson [17] introduced the concept of haptics as an extension to the word ‘touch’. The haptic sensory system is a wider understanding of the sense of touch, and Gibson describes the haptic system as follows: “the sensibility of the individual to the world adjacent to his body by the use of his body” [17] (p. 97). The concept includes the person’s deliberate and active movements, balance and orientation as well as proprioception [17] (pp. 36–37).
The CI is a hearing aid device that partly operates underneath the skin of the user. CI surgery started in the early 1970s, when the focus was mainly on speech recognition. Since then, CI processor technology has been developed and expanded to a level where even musical perception has become identifiable. For the past 10 years, research has concentrated on deaf children’s speech, music and pitch perception, singing and playing of different instruments [18,19], as well as on the perception of music by deafened postlingual adults [20,21]. The recordings used in these studies were aimed at sighted deaf or deafened people. There appears to be very little, if any, research on deafblind cochlear implant users and their sound exploration and production.
As the tactual environment of the deafblind person is so important, CI users’ haptic senses are heightened, and sounds may also be perceived through vibrations in the air or through different media; music can therefore be one means of enjoyment and self-expression. Haptic hobbies such as ceramic crafts, linoleum cutting and printing, as well as cooking classes that concentrate on taste and smell perceptions, are also popular with deafblind CI users [8]. This user group’s love of arts and culture is sustained through environmental guiding and technology that support their activities, such as audio description pages, portable induction loops, neck loops and radio receivers.
Similarly, human guiding and audio descriptions aid the deafblind CI user in their explorations of artistic objects and events. When it comes to more fundamental aspects of experience, human touch and showing through movements might get closer to the natural experience of events. For example, statues in a museum that cannot be touched because of restrictions can be experienced by re-enacting the postures of the sculptures together with a sighted person [22]. A deafblind person may experience art forms through their hands and body and by sensing vibrations felt from the art work through touch. Exploring the artefact with different tapping motions may produce different vibrations and sounds depending on whether the artefact is hollow or solid; this is referred to as a vibrosensoric experience [23].
Another study of blind people’s art-making experiences found that the participants explained aesthetic elements using musical concepts, such as scale, shape, line and rhythm [24] (pp. 87–89), indicating that there seems to be a natural link between the elements of space, shapes and musical concepts. This has also been noticed in research within the area of blindness [25] (p. 231): as sound comes from different directions, it simultaneously indicates the space between the source of the sound and the receiving person.

1.1. Deafblindness and Perceived CI Sounds

In this article, we focus on the haptic experiences of deafblind CI users and on the modes in which they communicate their experiences. We ask what practical benefits self-made sound descriptions could have for deafblind CI users in their haptic explorations of art works. Acquired deafblindness is a general term describing a group of very different individuals with severe combined visual and hearing (dual-sensory) impairments. According to the Nordic definition of deafblindness (see link in the Supplementary Materials), this is a combined vision and hearing impairment of such severity that it is hard for the impaired senses to compensate for each other. Thus, deafblindness is a distinct disability. Usher syndrome is a common reason for vision and hearing to deteriorate over time. All participants in this research have Usher syndrome; their condition is commonly called deafblindness, and hereafter we use this term to describe the condition of our research participants.
During the last ten years, CI technology has developed extensively, and through these devices some of the participants’ auditive perception has been partly restored or stimulated. In this study, four deafblind people were using CI technology and one participant was using behind-the-ear hearing aids. CI users describe the sound and musical perception perceived through the CI as quite different from natural and direct sounds. With the CI, there is an additional whistling and hissing sound. It is not a perfect sound, and it is something that the user has to get used to. CI users have their own settings on their devices and can adjust their hearing experiences by using a remote control to change the volume or sensitivity, or to select a specific program which changes the frequency. Russ Palmer, a deafblind CI user who is also one of the authors of this article, explains this effect, noting that he was a keen piano player before getting his CI installed: “I used to play piano, but I stopped because the CI makes the sound so different, like a honkey tonky out of tune piano”. The technology thus helps with basic needs but fails to provide the appropriate level of sensitivity in some situations, missing the qualitative experiences so important in the arts.

1.2. Talking Devices and Computer-Assisted Technology

Deafblind CI users are able to use technological devices that create artificial speaking sounds. There are many talking devices to assist blind people in receiving information through speaking voices, such as talking watches, clocks, mobile phones and text-to-speech programs. One device is the audio labeller, which is used to label CDs, DVDs, files or documents along with certain food items so that blind people are able to locate and identify items quickly, efficiently and independently. This consists of recording one’s message onto a recording device; the message is then copied onto a magnetic label. By positioning the device over the magnetic label, the recorded message is played back through the same device, which contains a small built-in speaker. This device allows the user to record their own voice, which is easier to understand compared to listening to a synthesised electronic voice that may be unintelligible because of a different dialect or choice of tones and words.
A direct connecting device between the CI and an electrical device reduces background noise and echoing sounds. This direct connecting device is easier to use than headphones, which can be uncomfortable when using CIs. However, when the device is connected to an audio medium, the CI user only hears the sound that is being played through the device and nothing else; for example, speech by others in the same room cannot be heard. These aspects of usability only emerge through prolonged use but are perhaps seldom reported back in research. The lack of technological usability is something that the CI user either needs to put up with or find their own solution for.
In finding solutions for technological malfunction or issues related to user experiences, the solution may be an individual “workaround” or something that several users have found useful (see also [7]). For example, deafblind people using CIs may enhance their acoustic environment through using a hat to deflect unwanted sounds (see Figure 1). The use of a wide hat also helps them to balance their own voice when producing vocalised sounds [26].
One of the major problems for CI users using hearing aid devices is background noise, for example when listening to music or operating smart devices. To connect to such devices, there are three options: listening through a telecoil/induction loop, through a direct connection, or via Bluetooth combined with screen reader and voice command software; the latter are referred to as talking software. However, learning how to use these devices may not be straightforward for the user, as each manufacturer has its own operational strategy. For example, different computer operating systems work in completely different ways, and it may be necessary to use devices from different platforms (PC, Mac, Linux) to suit one’s needs.
Due to their dual-modality impairment, deafblind persons generally have difficulty in perceiving their surrounding environment from a distance, and they often need help from an assistant or interpreters. Usually, an auditive and environmental description is made by sighted people, sign language interpreters or personal assistants, with blind people as the recipients [27]. Riitta Lahtinen [22] has previously studied how to interpret visual arts onto the body of deafblind people using touch and haptices. In her doctoral thesis on social–haptic communication she coined the words haptices and haptemes [22]. Haptices are spoken words that are communicated through touch on the body, and haptemes are the grammar of these touch messages, for example pressure, movement and direction. Together they form social–haptic communication, which in its original use (basic haptices and haptemes) functions as a support for verbal communication but also includes environmental descriptions that can be used in connection with visual arts (paintings, still photographs, etc.). This method allows for a more human approach to aiding deafblind individuals.

1.3. Multimodality

Multimodality studies argue that humans make and communicate meaning in multiple ways, often through many different modes and channels, such as writing, speech or gesturing, but also through body movements and voice intonations, or even artistic expressions such as dance, singing or music making. These means of communication often appear together, even if not always simultaneously [28] (p. 2). By communicating multimodally, we convey a more complete image of what we want to express. It is also recognised that different modalities have different qualities, potentials or restrictions, but that none should be considered to have more potential than the other, even though speech has traditionally been considered to have the greatest reach [28] (p. 3). This allows for the democratic inclusion of non-verbal modes of communication, such as sound descriptions devoid of words (for example, a deafblind person’s sound description of making Karelian pies; see Maarit Suominen’s video link in the Supplementary Materials).
Multimodality studies do not generally focus on the senses or the sensory modalities but rather on the cultural and social aspects of modalities of communication [28]. In this study, we draw on the theory of multimodal communication and on sensory ethnography because of the alternative ways our participants used to communicate their aesthetic experiences. They have changed their perceiving modality from visual to haptic, internally using technology to hear sound through CIs, and in this study they have also changed modalities from haptic to voice communication. We here focus on the sounds perceived by CI users and on the changing of the channels of sensory experience to another mode of communicating, namely from direct touch to immediate vocalisation and sounds.
It is generally understood that the human senses are interconnected and usually work in close relation to each other [29,30,31]. Pink [32], a visual anthropologist, has developed an ethnography that encourages the acknowledgment of multiple sensory experiences in research. She describes her sensory ethnography as “a way of thinking about and doing ethnography that takes as its starting point the multi-sensoriality of experience, perception, knowing and practice” [32] (p. 1). In this methodology, the participants’ sensory experiences may, for example, be re-imagined or self-experienced by the researcher, together with the participants, so that the researcher gains an embodied understanding of the participants’ experiences. In our study, as seeing and hearing researchers, we may not be able to experience the world through an impaired perspective, but we can involve a “native” deafblind person as an informant in our writing process when describing the workshop, to support our observations. One of the researchers also has long-term experience of the deafblind culture.
In our case, the participants have an altered sensory experience, as they cannot use the main distance modalities but only the sense of touch. Compared to the distant and objective visual, the haptic modality is always limited to the subjective touch area of the hand or the body. Touch cannot convey the whole at once but provides sections of the whole that need to be reconstructed in one’s mind to form a whole over time [33] (p. 12). The haptic is therefore explorative and constructive in nature, as well as being intimate, personal and direct. The haptic experience is immediate, unmediated and temporary. When we stop touching, the image or the feeling of the experience is gone in an instant unless we find ways of capturing it instantaneously. By mere touch we cannot perceive much information from a surface; we need movement to detect surface structures, orientations and material qualities. However, deafblind participants are extra sensitive to their environment and the haptic interfaces that are available. Due to sensory substitution, the deafblind condition raises haptic expertise and tactile working memory to expert levels [34]. Nicholas [34], a neuroscientist, has studied deafblind subjects and found that deafblind people are generally more experienced in recognising stimuli by active touch [34] (p. 17), that their tactile memory is enhanced, and that they have superior tactile performance [34] (p. 18).
Fingertips, although extremely sensitive, cannot follow cavities in small figurines. Thus, some blind persons prefer to explore very small objects by putting them into their mouths, using the even more sensitive tongue to discover the object [10] (p. 3). Akner-Koler and Ranjbar have identified a particular haptic aesthetic sensitivity [10] (p. 3), which concerns actively and physically exploring the properties of objects and the emotional responses to them. Additionally, multimodal communication is highlighted in these special circumstances. The personal and expressive language a person uses might not change due to the deteriorating sense, but the methods and modalities of receiving and communicating with peers might change many times during a person’s lifetime [21].
We think there is a need for sound descriptions of haptic experiences. When the sound and sound description is self-produced, it is more familiar and more understandable to peers in one’s own community. It also embodies the experience of the event, and at a later point one can reconnect to that experience when replaying the sound description. In this way, the sound description creates a new embodied memory, and piece by piece a library of “experience memories” can be built up and stored with new mental images connected to them. This is similar to how sighted people take pictures with a camera to look at later in order to re-experience the photographed events. In the following, we present the research intervention we created in order to explore CI users’ use of self-produced sound descriptions of their haptic experiences of sculptures in an art gallery. We first introduce the setting, the participants and the research methods. We then present our analysis and discuss the findings and implications for the CI user community.

2. Methods

The Association of Finnish Sculptors and the Finnish Deafblind Association organised a tactile art exhibition in which a deafblind person selected the most “touchable” sculpture of the year, which was then awarded a “Most Touchable Sculpture” prize. The sculptures used in this research setting were selected from the 2016 exhibition at Galleria Art Kaarisilta in Sanomatalo, in central Helsinki (Figure 2; see also the video presentation of the exhibition: http://areena.yle.fi/1-3785443).
The present research intervention was designed around a voice therapy and sound workshop for CI users. We studied deafblind people’s exploration of three-dimensional sculptures by hand movements, and how they interpreted the shape of the sculptures by making sound descriptions of them. The participants’ voice and sound descriptions were recorded with a portable recorder and later edited using the GoldWave sound editor before being saved in MP3 format. Figure 3 shows an example of how the sound descriptions are presented on the Finnish Deafblind Association website; the image of the sculpture is shown only at the end of the sound description in order not to disturb the impression of the sound description for sighted audiences. Figure 4 shows how a CI user listens to the sound descriptions through a CI and a T-loop, connecting his hearing device directly to the source.
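For readers interested in the practical side of this production step, the sketch below illustrates how a set of recorded performances could be batch-trimmed and exported to MP3 in a scripted way. It is a minimal example under our own assumptions (the pydub library with ffmpeg installed, and a hypothetical folder of WAV recordings) and is not the actual GoldWave editing procedure used in the study.

```python
# Minimal sketch of a batch audio-editing step comparable to the one described
# above. This is NOT the GoldWave workflow used in the study; it assumes the
# pydub library (which requires ffmpeg) and a hypothetical folder of WAV files.
from pathlib import Path
from pydub import AudioSegment

RAW_DIR = Path("recordings_raw")      # hypothetical folder of raw WAV recordings
OUT_DIR = Path("sound_descriptions")  # hypothetical output folder for MP3 files
OUT_DIR.mkdir(exist_ok=True)

for wav_path in sorted(RAW_DIR.glob("*.wav")):
    clip = AudioSegment.from_wav(str(wav_path))
    # Trim 2 s of lead-in and lead-out around the performance (illustrative values).
    trimmed = clip[2000:-2000] if len(clip) > 4000 else clip
    # Export as MP3 for publication on a website.
    trimmed.export(str(OUT_DIR / f"{wav_path.stem}.mp3"), format="mp3", bitrate="128k")
```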

2.1. Research Setting

The sculptures were displayed on three tables in three different rooms. A seven-hour-long workshop led by a blind music teacher was conducted with five deafblind CI users. The workshop was divided into two parts: the first introduced the participants to their own voice production, whilst the second half concentrated on producing a sound description of a sculpture. All participants first explored all five sculptures blindfolded, using their hands and hand movements. Because some participants have some sight left, the blindfolding ensured that all participants were using only haptic information.
Some preparatory work was done with a voice coach in the workshop prior to the sound description process. The participants explored how to create sounds using their voices and bodies. There were three stages: first, an exploration of the voice; then, an exploration of body sounds; and finally, the combination of voice and body sounds. It was noticed in the final part that some participants explored their voices creatively by making improvised and very artistic sounds.
Most of the participants were non-musicians and were not used to using their voices in this way. Most of the participants have been severely hearing impaired from birth, well before they received their CIs, and therefore lacked self-confidence when using their own voices for fear of making incorrect sounds. This may be due to the lack of high-frequency sounds, typical of people with Usher syndrome, which the participants were not able to hear when using hearing aids before the advent of CIs.
All five participants, four men and one woman, have been using hearing aid devices such as behind-the-ear hearing aids (HABE) (Table 1) bilaterally since childhood, from the age of three to seven years. Without hearing devices, they are deaf or severely hearing impaired, and they have experience of auditive information only through hearing devices (HD). All participants communicate through spoken language. All of them have Retinitis Pigmentosa, a deteriorating visual impairment (VI) with a narrow visual field and night blindness. Most have some visual perception of the world around them or use their visual memory: one of the participants is blind; the others are partially sighted.
The first part of the workshop explored how the body can be warmed up using various breathing and vocal exercises. At first, the participants hesitated and showed some resistance to exploring and using their voices. Additionally, the sound they perceived from the teacher took time to be interpreted through the hearing aid devices; for example, when using hearing aids, the sound perceived may be at a distance away from the person’s body, whereas the sound perceived through a CI may be closer to the body. Another factor to be considered is how this soundscape is received through hearing aid devices, as it may be distorted, out of pitch or in tune, depending on the individual’s residual hearing.
The next phase of the workshop explored movements and sound creatively together in order to project the individual’s voice. Some participants responded well to this approach, while others were unsure how to use their voices. However, as the workshop progressed, their confidence grew. Regarding dynamics, for example loud or soft sounds, these vary from person to person, as the participants may have to adjust their hearing aid devices according to the room acoustics. For example, if the room is echoing, this can cause confusion in the individual’s sound perception. In the final part of the workshop, each participant had the opportunity to explore a theme idea which illustrated their vocal creativity, for example sounds from under the sea, making bubbles in water and deep vocalisation sounds. The results included some very interesting examples, some of which were humorous and creative.
The sculptures were then presented on a large round table. After examining all five sculptures, each participant selected one sculpture to focus on. They then considered how to produce a sound description of that chosen object. Each participant spent 10–30 min planning and producing their sound and vocalisation. They were asked to imagine what kind of sound image they would create of the sculpture, and they also rehearsed some sounds by themselves. When they felt ready, they called the workshop leader over and started recording their interpretations individually while the other participants waited in the next room. All interpretations were based on haptic exploration, and this haptic information was interpreted at the language level (written description) and through vocals and sounds (non-verbal description).
The workshop leader was available to the participants and provided supervision when needed. The workshop leader also audio-recorded each performance, edited the recordings and uploaded the final versions to the website of the Finnish Deafblind Association, with the permission of the participants.

2.2. Sound Descriptions of Haptic–Aesthetic Experiences

The five sound description cases lasted from 25 s to 1 min 20 s. All sound descriptions had a photograph of the art work at the end of the recording, shown for 3 s. The five sound descriptions include photographs, the names of the art work and the artist, the year of the work’s creation and information on video length. However, we recommend that the reader of this paper first listen to the sound description of the art piece without any image, as in this way the reader can better imagine the felt experience alone, as the blind/blinded participants did.
SOUND DESCRIPTION 1
Blinded participant 1: vocal and sound description.
Sound description of the sculpture “Beyond Presence”, artist: Saana Murtti, 2013; duration 25 s.
SOUND DESCRIPTION 2
Blinded participant 2: vocal and sound description.
Sound description of the sculpture “Pieni utelias” (Small curious), artist: Kaisu Koivisto, 2015; duration 1 min 20 s.
SOUND DESCRIPTION 3
Blinded participant 3: vocal and sound description.
Sound description of the sculpture “Poika ja pallo” (Boy and ball), artist: Tarja Malinen, 2013; duration 23 s.
SOUND DESCRIPTION 4
Blinded participant 4: vocal and sound description.
Sound description of the sculpture “Tanssija II” (Dancer II), artist: Harri Kosonen, 2016; duration 47 s.
SOUND DESCRIPTION 5
Blinded participant 5: vocal and sound description.
Sound description of the sculpture “Nuotio” (Camp fire), artist: Antti Keitilä, 2016; duration 1 min.
We also invited an independent deafblind person to give us an outsider’s text description of the same art pieces. This person used spoken Finnish and received Finnish hands-on sign language. When the person was giving their verbal description, their speech was noted down by a note taker. This text was sent to the person by e-mail and was later edited into shortened descriptions of each of the five sculptures. In this study, we utilised these verbal descriptions of the same sculptures as a reference against which to compare the non-verbal descriptions. Below, we display only one of these shortened descriptions, translated from Finnish into English, rather than all the longer original versions; however, all the full text descriptions are also available (in Finnish) on the same website as the sound descriptions: http://www.kuurosokeat.fi/kommunikaatio/tuntokuvailu.php.

3. Results and Discussion

The collection and analysis of the data was carried out by one of the authors, who also participated in the workshop as an organiser and assistant. Although she is hearing and sighted, she can be considered a “native” sensory ethnographer in this context. She has many years of experience of working with deafblind people, and through marriage she also shares a deafblind person’s everyday life. As an interpreter of the deafblind perspective, she is as well suited as anyone not sharing this very particular condition can be. In her interpretation of the data she is therefore able to use her experiential and long-term personal knowledge in her role as researcher. The participants’ verbalised statements of their experiences provide us with only a tiny part of all the dimensions of the event, but through the interpretation of the researcher these are reflected through the theoretical lens as well as through an overall understanding of the multisensory experience and multimodal communication of the participants.
As a frame for the text-based description analysis, we used the model by Lederman and Klatzky [35] concerning hand movements during haptic object recognition. This model includes the notions of sensing pressure, vibration, temperature, postures and movements [35] (p. 121) and has been extended by Akner-Koler and Ranjbar [10] (p. 4). In this article, we further developed the model to include the notions of amount and orientation. For the vocalisations and sounds, we utilised a qualitative content analysis based on the sounds produced, the means of sound production, the type of description, and pitch and volume.
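To make the analytic frame concrete, the sketch below shows one hypothetical way to lay out the coding categories as simple data structures, filled in with the values reported for sculpture 4 in Tables 2 and 3. The field names follow the categories named above; the code itself is our own illustration and not a tool used in the study.

```python
# Illustrative sketch only: the extended analytic frame expressed as simple
# coding structures. Field names follow the categories described in the text;
# the code is our own illustration, not a tool used in the study.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TextDescriptionCoding:
    """Haptic properties coded from a written description (extended frame)."""
    material: Optional[str] = None
    shape: Optional[str] = None
    size_thickness: Optional[str] = None
    amount: Optional[str] = None        # property added in this study
    temperature: Optional[str] = None
    weight: Optional[str] = None
    texture: Optional[str] = None
    orientation: Optional[str] = None   # property added in this study
    mental_images: Optional[str] = None

@dataclass
class SoundDescriptionCoding:
    """Dimensions of the qualitative content analysis of the sound descriptions."""
    produced_sound: str
    means_of_production: str
    type_of_description: str
    pitch_and_volume: str

# Values reported for sculpture 4 (see Tables 2 and 3).
text_4 = TextDescriptionCoding(
    material="metal",
    shape="swirl, knob, head",
    size_thickness="fits inside hand",
    texture="smooth in a rough way, spiky",
    orientation="swirls upwards, outside surface, above",
    mental_images="fragile, shaped like a tube, wrapping",
)
sound_4 = SoundDescriptionCoding(
    produced_sound="whim…woosh howl…AAAH",
    means_of_production="voice, blowing",
    type_of_description="shape",
    pitch_and_volume="low, middle, higher pitch level",
)
```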

3.1. Text Description

The independent deafblind person who gave us the text descriptions is blind and does not use any hearing devices, which means that no useful hearing is available. The person has many decades of experience of receiving different kinds of environmental descriptions from interpreters, personal assistants and friends. In everyday life, the person mainly uses the tactile sense, which constitutes the main channel for receiving information. The person has thus developed a haptic aesthetic sensitivity for tactile differentiations and was able to utilise this expertise in the written production of a haptic description. The person is also very verbally talented and has an analytical disposition.
The text descriptions included material and texture, shape, size or thickness, temperature and weight, as elaborated in a previous version of this study [36]. These features have been recognised by Akner-Koler and Ranjbar [10] and Klatzky and Lederman [37]. However, in addition to these familiar aspects of haptic research, we also detected the properties of amount, orientation and mental image in the text descriptions (see Table 2). As an example, we show here the fourth text description made by the deafblind person:
‘This fragile sculpture is like a metal wrapping that swirls upwards. The outside surface of the wrapping is smooth in a rough way but the inside is spiky. The wrapping is shaped like a tube that fits well inside a hand. Some parts of the sculpture extend above the tube-shaped wrapping, and above it there is a knob or a head’.
This text shows similarities to the sound description made by participant 4 (see Table 3).
Both informants described the different materials, shapes/forms and surface structure of all the sculptures. The thickness, amount and weight of the objects were also expressed. In the written text, there were also descriptions of different parts of the sculpture and their relationships, that is, their orientation to each other (swirls upwards).

3.2. Vocal and Sound Descriptions

A text description might be repeated later by other people using similar words, but a sound reproduction is individual. The sound descriptions captured a momentary experiential notion, experimenting with the sounds the participants could produce. In comparison, the text description took a much longer time to produce and allowed the writer to read and correct it over time. In addition, mental images were created by analogy in both the text and the sound descriptions where the participants had had sight and thus visual memory.
Participants 1 and 4 did not have any visual image of the subject matter of the sculptures; in other words, they have been visually impaired since early childhood. Participant 1 produced sounds by using their own voice and hands, sometimes at the same time, describing the sculpture’s material, surface texture and quantity. Participant 4 mentioned that the sculpture did not evoke any predefined real image when exploring it, but the whooshing sound was influenced by the participant’s perceived mental image of a hollow pipe. The sounds produced included whispering sounds, building up into a storm that turned into a howling wind sound. The howl (storm) sounds like meditative humming building up into a loud dramatic climax, “a deathly cry”. These sounds were produced with the participant’s own voice, blowing in and out, starting at a low pitch and then moving through a middle to a higher pitch level (Table 3).
One of the participants, participant 4, has formal training and education in music therapy. The participant also has a long background as a composer, song writer and singer, with concerts in arenas such as the Finlandia Hall in Helsinki. The participant described the artistic process of making the sound afterwards as follows:
‘Since I was not able to see the sculpture, I used my hands as a basis to explore and to create improvised sounds using my voice. Working from the base level of the sculpture, my hands moved around the shape upwards towards the top. As I was doing this, I also created my own improvised sounds through blowing and breathing heavily like a rushing wind sensation. At the same time as my hands explored the hollow windpipe-like structure, I started to create a vocalised variable humming sound which got louder and louder and increased in pitch level at the same time. This crescendo built up to a deathly cry at the end which was influenced by the sharp jagged shape of the sculpture at the top’.
As participant 4 described the shape of the sculpture, he synchronised his voice with the movements of his hands. The shape influenced the vocalised pitch level from low to high, with a dramatic ending as the hands found the sharp jagged shape at the top of the sculpture. The participant created a creative, improvised soundscape, which was more varied and longer than some of the others.
Participants 2, 3 and 5 had a clear and predefined mental image of the themes of the sculptures, which are a dog, a meditating person and a camp fire, as they had had vision before becoming blind. Participant 2 produced a dog sound using their own voice and was able to produce this relatively easily because these were familiar sounds (they had experience of hearing them); imitating them was therefore possible. The dog-shaped sculpture was thus described by dog-like barking and exploring through sniffing (sniff) and growling (grr).
Participant 3 had an image of a person sitting down holding a ball in his hands. This is described by a “thinking, humming, meditating” sound. The meditative sound was relatively easy to produce in a simple and musical manner, producing two pitch levels. Participant 5 had a predefined image of a campfire and had had personal experience of being at a campfire before. The participant touched the sculpture with their hands while wearing a ring on one finger, thus creating a rustling and clicking sound effect. Wood as a material is very responsive, and its vibration can be felt. The participant also created different sounds with the mouth, such as clicking and a wind sound effect (blowing in and out). No vocalisation sounds were produced in this description.
In general, the sound descriptions were combinations of different creative hand and vocalised sounds at different levels of volume. Three of the five participants (2, 3, 5) had a clear idea of the subject, that is, a clear mental image of the subject matter based on their haptic exploration and previous experience. Their sound descriptions thus mimicked the sound of the subject. However, there is no agreed vocabulary for sound descriptions. Non-vocalised sound descriptions are individual, personal experiences based on touch and hand movements over the art work. When compared to a description in written format, the words used have a certain learnt meaning and are based on linguistic grammar. A similar type of text could be produced by another person, because haptically we would pick up the same kinds of elements, such as material, temperature, size, etc. (see [10,17,35]).
We analysed how the sound descriptions represented inner experiences of haptic exploration (the mental image of the sculpture) and were re-interpreted through each participant’s own voice and body sounds, sometimes in combination with sounds made by touching the art work. These sounds describe the participants’ instantaneous experiences as their hands touch one point of the object, moving to the next point, discovering the material and size. The participants did not have a unified, agreed tone symbol in use. The sound descriptions can be described as short, experimental and playful sound effects that these participants tried out for the first time in their lives. We regard these sound descriptions as personal artistic productions by the participants, unique in themselves but ones that can be related to on a general level.
The sound descriptions were instructed by a born-blind music teacher, educated at the Sibelius Academy in Helsinki, Finland; they were designed and thought through by the participants, rehearsed and performed sincerely. The participants also reported being empowered by the experience of overcoming their shyness and trusting their ability to create new expressions by utilising their voices in new ways. This led to the idea that the creation of sound descriptions could also be utilised in adult CI rehabilitation, where speech and voice are used. Participant 4 said:
‘The only rehab I got was how to pronounce sounds, but I did not get any therapy where I could listen to my own sounds through CI or any experience of using my own voice or using my breath in the right way. Many deaf people are afraid to use their voices and levels of dynamics as they are not able to judge the intensities or volumes.’
Many of the participants extended their voice production to include adapted sounds, also using hand clapping, drumming and tapping on the sculptures. In this way, they started a communicative process with the sculpture, one in which the sculpture was also brought into the act of sound making. The sounds were not only human but also came from the sculpture itself. If we understand the aesthetic experience as wider than only the distant visual or auditive experience, this way of experiencing may even enhance the aesthetic experience of an object. These participants have a bodily communication with the object that allows them to take it into their physical realm and “play it as an instrument” through a haptic aesthetic exploration. This sense of play also triggered humorous aspects and joy. Further, it provided the opportunity to dwell on the art piece and take time with it, to indulge in it and to participate in it.
This haptic aspect of the aesthetic experience is seldom generally available. This is also apparent in the poor vocabulary for tactile experiences [13]. There are many colour descriptions of various shades of red, for example, but few for shades of softness or hardness. In the sound descriptions, the participants were able to “visualise” their experiences with intonation and volume as well as with multiple simultaneous sounds, giving a kind of three-dimensional view of the sculpture. In this sense, the sound descriptions were more multi-faceted than mere words, linking to the theory of multimodal communication becoming enriched by multiple modes of expression. The sound descriptions described the landscapes of the sculpture surfaces, giving time-based narratives of the journey along the surface. Participant 4 described the recording as helping to create new memories of the sculpture: “While making the sound description I created my own story that helped me to remember the sculpture’s shape”. Naturally, these embodied memories are important for people who cannot outsource their memory to photographs.
When participant 4 was asked what mental image he had of the sculpture one and a half years later, he described the hollow pipe with hand movements swirling upwards, and described the top of the sculpture, earlier referred to as the “deathly cry”, as a cry for “help”. The experience was re-lived, and the participant expressed the same severity and despair.

4. Conclusions

Ultimately, in this study, we analysed five deafblind people’s experiences of five sculptural art pieces and their textual and sound-based interpretations, focusing on a closer analysis of one of the participants. We found that the participants were able to cross over their sensory experiences from one sensory modality to another by using their voices to express what they felt haptically. Thus, the participants found multimodal ways of expressing their experiences, including utilising the sounds produced by “playing” the art pieces. Their haptic sensory modality proved able to provide sufficiently multifaceted information for a qualitative expression of an aesthetic experience.
For these CI users, an active interaction with art was made accessible through this workshop and the exhibition. This small experimental group showed us how art can be experienced through modalities other than the visual. The artistic sound descriptions make the sculptures accessible to the seeing and hearing population in a haptic-based format. This type of sound description could also be used in the context of art descriptions for blind and deafblind persons, as an additional support to the textual/audio descriptions in art galleries.

Benefits of this Research

The practical implications of our research point to self-produced sound descriptions as a valid means of enhancing quality of life and CI users’ experience of art-related events. Technological devices can be utilised to record the interpretation of a piece of art so that it can be replayed using different solutions. In this way, it is like a self-evaluation or self-creation process in time. Self-produced descriptions also allow for more familiar and comprehensive descriptions for one’s own use and that of others.
The method of making sound descriptions and listening to one’s own voice can also be utilised in adult rehabilitation; for example, some CI users have difficulties in controlling their breathing and their tone of voice while speaking. When recording the sounds, the CI user can get used to their own voice and to the use of tools and instruments for practising sound making. Over time they could become braver, be encouraged to trust their voices and sound making, and feel empowered rather than humiliated. We also think that CI technology in combination with self-made sound descriptions enhances the memorization of haptic art experiences, which can be recalled through the recordings of the sound descriptions. This expands the idea of auditive descriptions and encourages user-produced descriptions as artistic supports for traditional linguistic audio descriptions, as they can be used to create personal auditive–haptic memory collections, similar to photo albums.
A person who cannot hear their own voice while speaking needs to learn what their voice “feels like” when it is produced, and they need training through speech and body therapy. At the moment, this is not provided to adults. CI perception is more than only speech or music perception. One’s own vocalisation involves bodily features as well as sounds induced by using objects to create sounds.
In our study, the participants learned to use their voices more consciously and more confidently, trusting their voices to be a valid means of expressing felt experiences. They experienced the feeling of making sounds in their bodies. Although technological aids have become unavoidable in this user group’s everyday life, the experience of making and listening to sounds through their own bodies makes them feel more alive than making sounds through technological aids does.
In their everyday lives, CI users modify technology to better suit their needs. Participant 4 said, “When playing back one’s own sound description using the CI or hearing aid device, it may be appropriate to use a hat to recreate and copy the original recording using one’s own voice.” The hat enables the CI user to hear their own voice in a much clearer way. Furthermore, the hat allows the person to judge and adjust the dynamics of their own voice.
Thus, one point we are making here is that the available technology is technology-centred, and the users modify it to suit their needs better and to make it work for them. The modifications and self-made solutions and sounds create an added sense of ability and self-fulfilment. These adjustments also aid in the seamless use of the technology and naturalise its non-human aspects and notions. It is important that the user uses the technology as a tool rather than becoming reliant on it. When the CI user can produce an expression of a cultural event that can be shared with others and reproduced in a form that peers can comprehend, there is a sense of achievement. The position of mainly being a receiver of communication can be changed into one of producing and directing the form of communication.
As limitations of this research, we should mention the small number of participants and the restrictions that this puts on the generalization of our results. Additionally, the group of participants have very individual experiences due to their varied sensory abilities and their individually deteriorating conditions. In our future research, we aim to study this setting further, also including video recordings, to be able to analyse fundamental haptic ways of expressing shape in vocalisations and sounds. Aided by video analysis of the exploration of shape and size and other haptic features, for example basic shapes such as the cube, sphere or pyramid, we hope that the phonetic analysis of vocal sounds could produce more knowledge of a possible shared “shape-based grammar”. Additionally, the auditive–haptic sound descriptions could also be adapted to develop mobility and environmental orientation aids, indicating distances and height differences for deafblind people and individuals with memory deficiencies.

Supplementary Materials

The following sound description videos are available online. Video S1: https://m.youtube.com/watch?v=zSJ0megPD3I; Video S2: https://m.youtube.com/watch?v=5LBQYTUlnqU; Video S3: https://m.youtube.com/watch?v=tetUL7XkAUA; Video S4: https://m.youtube.com/watch?v=MK2WBtOBwLo; Video S5: https://m.youtube.com/watch?v=wmI1ZpGjxj4; Video S6: Video presentation of the exhibition by Areena: http://areena.yle.fi/1-3785443. Cochlear Implants: http://www.medel.com/us/cochlear-implants. Nordic definition of deafblindness: http://www.kuurosokeat.fi/tiedosto/nordic_definition.pdf. Link to artistic sound descriptions of baking Karelian pies: http://www.av-arkki.fi/en/works/minispectacles-as-nice-as-pies_en/. Finnish Deafblind Association link to the sound descriptions: http://www.kuurosokeat.fi/kommunikaatio/aanikuvailu.php.

Author Contributions

R.L. conceived, designed and performed the experiments; R.L. and C.G. analysed the data; R.P. contributed with content; R.L., C.G. & R.P. wrote the paper together.

Acknowledgments

We would like to thank all the participants of the research as well as the artists who made the art pieces: Saana Murtti, Kaisu Koivisto, Tarja Malinen, Harri Kosonen and Antti Keitilä, and the photographers of the art pieces shown on the website: Tuija Wetterstrand and Harri Kosonen. We also thank the voice coach Riikka Hänninen and the anonymous deafblind text interpreter. We would like to acknowledge the Finnish Deafblind Association and their website, where all the audio, video and text descriptions can be found. This article has been developed from an earlier conference paper: “Haptic Art Experiences Described as Vocals, Sounds and Written Words by Deafblind”, Proceedings of the Art of Research Conference, 28–29 November 2017, Helsinki, Finland.

Conflicts of Interest

The authors declare no conflict of interest. One of the authors is also the informant for this research. However, this person was not involved in analysing the data for the research; instead, he has provided his insider knowledge about the general CI user experience as a professional musician. In the writing process he reflected on his experiences and user experience. He also wrote parts of the text on technologies available for CI users.

References

  1. Pallasmaa, J. The Eyes of the Skin; John Wiley & Sons: Chichester, UK, 2005. [Google Scholar]
  2. Pallasmaa, J. The Thinking Hand; John Wiley & Sons: Chichester, UK, 2009. [Google Scholar]
  3. Vermeersch, P.; Nijs, G.; Heylighen, A. Mediating artifacts in architectural design: A non-visual exploration. In Proceedings of the CAAD Futures: Designing Together, Liege, Belgium, 4–8 July 2011; Les editions de l’Universite de Liege: Liege, Belgium, 2011; pp. 721–734. [Google Scholar]
  4. Heylighen, A.; Nijs, G. Designing in the absence of sight: Design cognition rearticulated. Des. Stud. 2014, 35, 113–132. [Google Scholar] [CrossRef]
  5. Herssens, J.; Heylighen, A. Challenging architects to include haptics in design: Sensory paradox between content and representation. In Designing Together—CAAD Futures 2011; Leclercq, P., Ed.; Les Editions de L’Universite’ de Liege: Liege, Belgium, 2011; pp. 685–700. [Google Scholar]
  6. Heylighen, A.; Bouwen, J.; Neuckermans, H. Walking on a thin line. Des. Stud. 1999, 20, 211–235. [Google Scholar] [CrossRef]
  7. Shinohara, K.; Tenenberg, J. A blind person’s communication with technology. Commun. ACM 2009, 52, 58–66. [Google Scholar] [CrossRef]
  8. Groth, C.; Mäkelä, M.; Seitamaa-Hakkarainen, P. Making Sense—What can we learn from experts of tactile knowledge? FORMakadem. J. 2013, 6, 1–12. [Google Scholar] [CrossRef]
  9. Groth, C. Making Sense through Hands: Design and Craft Practice Analysed as Embodied Cognition. Doctoral Thesis, Aalto Arts Books, Helsinki, Finland, 2017. [Google Scholar]
  10. Akner-Koler, S.; Ranjbar, P. Integrating Sensitizing Labs in an Educational Design Process for Haptic interaction. FORMakadem. J. 2016, 9, 1–25. [Google Scholar] [CrossRef]
  11. Merleau-Ponty, M. Phenomenology of Perception; Routledge: London, UK, 2010; (Original English Version Published in 1945). [Google Scholar]
  12. Ingold, T. Culture on the ground. J. Mater. Cult. 2004, 9, 315–340. [Google Scholar] [CrossRef]
  13. Macpherson, H. Articulating blind touch: Thinking through the feet. Senses Soc. 2009, 4, 179–194. [Google Scholar] [CrossRef]
  14. Harrison, P. Making sense: Embodiment and the sensibilities of the everyday. Environ. Plan. D Soc. Space 2000, 18, 497–517. [Google Scholar] [CrossRef]
  15. Paterson, M. Haptic geographies: Ethnography, haptic knowledges and sensuous dispositions. Prog. Hum. Geogr. 2009, 33, 766–788. [Google Scholar] [CrossRef]
  16. Gallace, A. Living with touch. Psychol. Br. Psychol. Soc. 2012, 25, 896–899. [Google Scholar]
  17. Gibson, J.J. The Senses Considered as Perceptual Systems; Greenwood Press: Westport, CT, USA, 1983; (Original Work Published in 1966). [Google Scholar]
  18. Torppa, R. Pitch-Related Auditory Skills in Children with Cochlear Implants: The Role of Auditory Working Memory, Attention and Music; Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioral Sciences University of Helsinki: Helsinki, Finland, 2015. [Google Scholar]
  19. Zatorre, R.J.; Baum, S.H. Musical Melody and Speech Intonation: Singing a Different Tune? PLoS Biol. 2012, 10, e1001372. [Google Scholar] [CrossRef] [PubMed]
  20. Maarefvand, M.; Marozeau, J.; Blamey, P.J. A cochlear implant user with exceptional musical hearing ability. Int. J. Audiol. 2013, 52, 424–432. [Google Scholar] [CrossRef] [PubMed]
  21. Prause-Weber, M.; Schraer-Joiner, L. Cochlear implants can help patients enjoy listening to and making music. Hear. J. 2007, 60, 60–63. [Google Scholar] [CrossRef]
  22. Lahtinen, R. Haptices and Haptemes—A Case Study of Developmental Process in Social-Haptic Communication of Acquired Deafblind People. Ph.D. Thesis, University of Helsinki, Helsinki, Finland. Cityoffset Oy: Tampere, Finland, 2008.
  23. Palmer, R.; Ojala, S. Feeling music vibrations—A vibrosensoric experience. In Proceedings of the BNAM2016, Stockholm, Sweden, 20–22 June 2016. Paper 53. [Google Scholar]
  24. Everett, J.; Gibert, W. Art and touch: A conceptual approach. Br. J. Vis. Impair. 1992, 9, 87–89. [Google Scholar] [CrossRef]
  25. Nilsson, M.; Schenkman, B. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences. Hear. Res. 2016, 332, 223–232. [Google Scholar] [CrossRef] [PubMed]
  26. Tuominen, H.T.; Palmer, R.; Korhonen, I.; Ojala, S. Evidence-based study on performance environment for people with and without cochlear implants (CI). In Proceedings of the BNAM2014 Conference, Tallinn, Estonia, 2–4 June 2014; ISBN 978-87-995400-1-3. [Google Scholar]
  27. Lahtinen, R.; Palmer, R.; Lahtinen, M. Environmental Description for Visually and Dual Sensory Impaired People; Art-Print Oy: Helsinki, Finland, 2010. [Google Scholar]
  28. Jewitt, C.; Bezemer, J.; O’Halloran, K. Introducing Multimodality; Routledge: London, UK, 2016. [Google Scholar]
  29. Pink, S. A multisensory approach to visual methods. In The SAGE Handbook of Visual Research Methods; Margolis, E., Pauwels, L., Eds.; SAGE Publications: London, UK, 2011; pp. 601–615. [Google Scholar]
  30. Gallace, A.; Spence, C. The science of interpersonal touch: An overview. Neurosci. Biobehav. Rev. 2010, 34, 246–259. [Google Scholar] [CrossRef] [PubMed]
  31. Shifferstein, H.; Wastiels, L. Sensing materials: Exploring the building blocks for experiential design. In Materials Experience: Fundamentals of Materials and Design; Karana, E., Pedgley, O., Rognoli, V., Eds.; Elsevier: Oxford, UK, 2014. [Google Scholar]
  32. Pink, S. Doing Sensory Ethnography; SAGE Publications: London, UK, 2009. [Google Scholar]
  33. Keller, H. The World I Live in; Hodder & Stoughton: London, UK, 1908. [Google Scholar]
  34. Nicholas, J. From Active Touch to Tactile Communication: What’s Tactile Cognition Got to Do with It? The Danish Resource Centre on Congenital Deafblindness: Aalborg, Denmark, 2010. [Google Scholar]
  35. Lederman, S.; Klatzky, R. Hand movements: A window into haptic object recognition. Cogn. Psychol. 1987, 19, 342–368. [Google Scholar] [CrossRef]
  36. Lahtinen, R.; Groth, C.; Palmer, R. Haptic Art Experiences Described as Vocals, Sounds and Written Words by Deafblind. In Proceedings of the Art of Research Conference, Helsinki, Finland, 28–29 November 2017. [Google Scholar]
  37. Klatzky, R.L.; Lederman, S.J. Toward a computational model of constraint-driven exploration and haptic object identification. Perception 1993, 22, 603–604. [Google Scholar] [CrossRef] [PubMed]
Figure 1. A deafblind CI user deflecting unwanted sounds by the use of a hat. Photographer: Riitta Lahtinen.
Figure 2. A deafblind visitor at the exhibition at Sanomatalo, 2016. Photo: The Finnish Deafblind Association Archive.
Figure 3. Sound description number 1 as it is presented on the website. Photo: The Finnish Deafblind Association Archive.
Figure 4. Example of listening to a sound description through a CI and T-loop. Photographer: Riitta Lahtinen.
Table 1. Participants’ background information, including their visual and hearing impairment.
 | Participant 1 | Participant 2 | Participant 3 | Participant 4 | Participant 5
Gender | Male | Male | Male | Male | Female
Age | 58 | 54 | 34 | 22 | 41
VI | Blind | VI | VI | VI | VI
HD 2016 | 2 CI (2004, 2010) | 2 CI (2011, 2012) | CI (2016, 2017) | 2 HABE (2015) | 2 CI (2012, 2014)
First HD | 4.5 years old | 3 years old | 4 years old | 7 years old | 7 years old
Table 2. Deafblind person’s text of the fourth description according to the developed version of the analytic frame of Akner-Koler and Ranjbar, 2016.
Properties of Interpreted Haptic Experiences | Elements of Text Description of Sculpture 4
Material | Metal
Shape | Swirl, knob, head
Size, thickness | Fits inside hand
Amount | –
Temperature | –
Weight | –
Texture | Smooth, rough way, spiky
Orientation | Swirls upwards, outside surface, above
Other, such as mental images | Fragile, shaped like a tube, wrapping
Table 3. Vocal and sound descriptions by participant 4.
Analysis of Vocal and Sound Description | Participant 4, Elements of Sound Description of Sculpture 4
Produced sound | Whim…woosh howl…AAAH
Means of sound production | Voice, blowing
Type of description | Shape
Other, such as pitch and volume | Low, middle, higher pitch level
