Article

The Effects of Musical Experience and Hearing Loss on Solving an Audio-Based Gaming Task

by Kjetil Falkenberg Hansen 1,*,† and Rumi Hiraga 2,†

1 Sound and Music Computing, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Lindstedsvägen 3, 11428 Stockholm, Sweden
2 Industrial Technology Department, Tsukuba University of Technology, Tsukuba 305-8520, Japan
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Sci. 2017, 7(12), 1278; https://doi.org/10.3390/app7121278
Submission received: 23 October 2017 / Revised: 2 December 2017 / Accepted: 5 December 2017 / Published: 10 December 2017
(This article belongs to the Special Issue Sound and Music Computing)

Abstract

We conducted an experiment using a purposefully designed audio-based game called the Music Puzzle with Japanese university students with different levels of hearing acuity and experience with music in order to determine the effects of these factors on solving such games. A group of hearing-impaired students (n = 12) was compared with two hearing control groups characterized by high (n = 12) or low (n = 12) engagement in musical activities. The game was played with three sound sets or modes: speech, music, and a mix of the two. The results showed that people with hearing loss had longer processing times for sounds when playing the game. Solving the game task in the speech mode was particularly difficult for the group with hearing loss, and while they found the game difficult in general, they expressed a fondness for the game and a preference for music. Participants with less musical experience showed difficulties in playing the game with musical material. We were able to explain the impacts of hearing acuity and musical experience; furthermore, we can promote this kind of tool as a viable way to train hearing through focused listening to sound, particularly with music.

1. Introduction

Musical experiences affect persons with hearing loss and hearing persons similarly. Hence, music can provide similar benefits to both groups [1]. However, it is well known that people with hearing impairment listen to music much less than hearing people do. This can be seen, for example, when comparing individuals before and after cochlear implantation [2]. It is also established that even people with only mild or moderate hearing impairment exhibit language disorders [3].
In order to increase the likelihood of people with hearing loss having enjoyable listening experiences, we believe that one solution is exposure to activities involving focused listening. Hearing persons focus on the sound itself when they listen to music (musical listening) while also paying attention to the source or the situation of the sound (everyday listening) [4]. We use “focused listening” for people with hearing loss to mean listening to sounds while noticing their changes over time in pitch, timbre, and other sound features, as hearing persons do.
Thus, playing an audio game where attention to music is required to solve the task—such as one that can be played casually for entertainment—would promote active listening to music. In turn, this voluntary exposure to sound supports language acquisition and development [5], personal development and social grooming [6], and the ability to extract information from the coincidental and surrounding sounds of everyday life [7].
In previous studies, the authors and colleagues presented an audio game called the Music Puzzle as well as preliminary results from pilot testing (see for instance [8,9]); in the current work, we investigate specifically how hearing acuity and musical experience impact game-playing achievements. The experiment involved both hearing and hearing-impaired university students.
Hearing impairments, hearing loss, and hearing acuity are closely related terms. Hearing loss is, according to several definitions, “a general term that refers to a reduced auditory acuity” [10]. Auditory acuity is also well defined and “describes how sensitive the auditory system is to sound” [11]. Hearing acuity measured through audiometry will not determine to what extent a person listens to or likes music. In fact, there are many accounts of professional artists with hearing loss who have earned great success in music, such as Evelyn Glennie [12], Paul Whittaker [13], and Danny Lane [14]. Another example is the world-touring Gallaudet Dance Company [15], whose members are university students with hearing loss.
Organizations and teachers in different countries manage activities related to teaching music to children with hearing loss (for instance, Music and the Deaf [16] and hear ME now [17]) and to experiencing music (for instance, initiatives by the Mahler Chamber Orchestra [18]). There are reports on how to accommodate music activities for the hearing-impaired [19], and music education for the hearing-impaired is furthermore an active research area, for example in teaching orchestral music [20].
Even without personal music training or special music activities, many young people with hearing loss enjoy music actively: through dancing, going to karaoke, watching artist promotion videos, playing the drums, or just listening to music. Many of them also like to play music games on computers, on mobile devices, or at video arcades. It has, however, been shown [21] that interpretation of the communicated emotions in music (arguably music’s most important characteristic) is significantly less precise in the hearing-impaired compared to typical listeners, partly due to problems of timbre and pitch perception.
Familiarity with music, gained from exposure, will increase emotional engagement in listening [22]. Concerning motivations for engaging in musical activities, it has been found [23] that although motivations for the hearing-impaired were similar to those of the hearing population, the degree of early exposure to music has an impact on music-making later in life. Musical experiences have also been documented as having positive effects and providing benefits for hearing subjects, for instance related to language acquisition [24], social interaction [25], and various aspects of auditory skills [26,27,28,29,30]. Without focused listening, the same benefits for language acquisition cannot be achieved [31].

1.1. Hearing Loss, Music Listening, and Music Training

Studies on the relationship between hearing loss and music listening have been performed within several areas. In music therapy, the positive effects of musical interventions on children with hearing loss have been described [32,33]. Much of the recent research on music and hearing loss has focused on the emerging technologies related to cochlear implants, while some studies look particularly at hearing loss with just hearing aids, such as descriptions of how people with hearing aids listen to music from an audiological perspective [34,35]. Music perception by cochlear implant users has been observed both by otolaryngology laboratories [36,37,38] and by psychologists’ groups [39,40].
Music perception by people with hearing loss has also been explored from various perspectives: which music elements to use in an experiment, ways to present music, the benefits of cochlear implants and hearing aids, and the age and impairment history of participants. Experiments related to the perception of pitch vary from basic pitch discrimination tasks [41] to memorization [36], singing [1], and recognition [40] of melodies. Experiments related to exploring the role of temporal information for melody recognition have included both tempo and rhythm as well as pitch information [42]. It has been shown that pitch and timbre—when parametrically varied in a synthesized tone signal and with music listening history accounted for—interfere and confuse listeners in discrimination tests [43]. Potentially, musical training can improve timbre perception and identification in cochlear implant patients [44].
Musical training is typically given to people with hearing loss over either the long or the short term. Various experiments have investigated possible long-term effects of informal music activities provided to participants at schools [1], and measured the effectiveness of long-term music lessons for improving the perception of environmental sounds [45]. In short-term training for cochlear implant users, little progress was found in terms of music skills [46]. Effects of training were found in linguistic identification tests after controlled training that combined the acoustic information of a hearing aid with the electric information from a cochlear implant [47]. Some of these results are related to brain development [1,46].

1.2. Games for Training and Special Support, and Audio-Based Games

In recent years, needs for training and skill practice have been addressed through so-called serious games or games for learning. Such games have been shown to be both effective and motivational [48,49], and they have been applied in a wide range of contexts. The design of games for persons with physical or cognitive impairments, for instance, does not only have the purpose of giving them opportunities to play entertaining games, but is also intended to improve logical thinking, cognitive skills, or social skills. Games for children with autism spectrum disorder have, in different studies, been shown to support the development of social skills such as membership, partnership, and friendship [50,51].
For auditory training such as exposing oneself to focused listening, it is reasonable to expect that serious games based on sound would be appropriate. Audio-based games are common both among serious games and among games purely for entertainment. However, they differ greatly in design, gameplay, and functionality [52,53]. In particular, there are many examples of such games that have been developed for people with visual impairments and that can be played entirely without a graphical user interface [54,55]. Additionally, there are many general music tutoring games that train specific skills such as solfège, rhythm, melody, and notation [56,57,58]. Games for training listening for the hearing-impaired are less common, although some specialize in cochlear implants [59,60].
The above and many other games provide promising interfaces for gameplay involving solving specific musical tasks, or for training through supplementary modalities for the impaired, which are predominantly visual. Instead of adapting these games to sound discrimination training for the hearing-impaired, we suggest methodically focusing on the impaired auditory sensory organ using an alternative game design based on focused listening with an elementary graphical interface.

1.3. Aim of the Study

For our studies, we have developed an audio-based game with a simple graphical user interface that provides no visual cues to help solve the game. The game includes musical material, speech in the form of read poems, and mixes of those materials. It is intended that people with hearing loss use focused listening in order to win. We conducted an experiment to explore whether the game can be used in auditory training and engaged three participant groups that differed in measured hearing acuity and self-reported music experience; this way, we could investigate the impact these factors have on game playing, but also the impact of speech and language ability, since this correlates with hearing acuity.
In addition, we were interested in finding out how the game is played, what makes it enjoyable, and if music is a preferred material in auditory training. In order to resemble an everyday listening situation, the participants played with headphones, and not with e.g., Bluetooth bridging for hearing aids. They adjusted the volume of both hearing aids and the game sounds to their typically preferred level; this way, we explored the impact of their hearing relative to their typical listening conditions.
If the game is appreciated by the experiment participants, it is ready to be used as a formal and informal training tool for a wide group of the hearing-impaired. On the whole, we would be able to explain the impact of the above factors, and therefore recommend considering this kind of tool in the future training of hearing.

2. The Music Puzzle

The developed game, called the Music Puzzle, has gameplay that resembles that of a classic jigsaw puzzle. It is developed for Android devices with touchscreens and uses Pure Data as a real-time audio engine through the libpd library [61] (see also [8,9]). Our initial idea was to use music, meaning that the purpose of the game is to recompose a musically correct piece of music from fragmented parts of a recording. However, the game is not restricted to music and can use any audio recording.
The complete puzzle to be solved is represented by a ball on the screen, and pressing this ball will play the corresponding sound file (see Figure 1). This ball is then divided into smaller pieces or “sound objects” (explained shortly) with identical appearances, which are randomly distributed in the graphical user interface. These smaller puzzle pieces or sound objects can be reassembled into a complete whole. The sound objects can furthermore be manipulated in pitch and filtered by equalization (from here on referred to as EQ); like the bigger ball, they play the linked sound upon being pressed. For a video example of the game, please see the supplementary materials.
The player hears the entire puzzle to be solved once, then proceeds to reorder the pieces and change pitch and EQ appropriately. In order to solve the puzzle in its intended music mode, one has to memorize and understand not only melody and rhythm, but also timbre and possibly other characteristics. With non-musical types of stimuli like speech and environmental sounds, decisions for solving will rely on cues beyond the musical ones, such as language and meaning.

2.1. Sound Objects

Sound objects are generated as fragments of the original recorded audio following a “shake the tablet” action to mimic the concept of breaking a fragile ball into pieces. The number of objects generated depends on shaking force. Segmentation is done by dividing the whole file into fragments of equal duration. In the game, these sound objects are connected with fast crossfades (5 ms) to avoid audible clicks at the cut points. A random selection of objects is further modified in pitch or EQ, or both; in the latter case, the task difficulty naturally increases [43]. Both segmentation and modifications are performed in real time.
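As an illustration of the segmentation step, the following minimal Python sketch divides a recording into equal-duration fragments and rejoins them with short linear crossfades. The function names and the use of NumPy are ours for illustration; the game itself implements this in Pure Data.

```python
import numpy as np

def split_into_objects(signal, n_objects):
    # Divide the recording into n equal-duration fragments ("sound objects").
    return np.array_split(signal, n_objects)

def join_with_crossfade(fragments, sr, fade_ms=5):
    # Concatenate fragments with a short linear crossfade to avoid clicks.
    fade = int(sr * fade_ms / 1000)
    fade_out = np.linspace(1.0, 0.0, fade)
    fade_in = 1.0 - fade_out
    out = fragments[0].astype(float)
    for frag in fragments[1:]:
        frag = frag.astype(float)
        out[-fade:] = out[-fade:] * fade_out + frag[:fade] * fade_in
        out = np.concatenate([out, frag[fade:]])
    return out

# Example: break a 15 s, 44.1 kHz mono recording into 6 objects and rejoin them.
sr = 44100
signal = np.random.randn(15 * sr)  # placeholder for a real recording
rebuilt = join_with_crossfade(split_into_objects(signal, 6), sr)
```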
The sounds are modified using filtering (EQ) by adjusting the energy of the low- and high-frequency components in the audio signal; this is done by either a low-pass or a high-pass filter (using the standard Pure Data objects lop~ and hip~). The low-pass filter has a cutoff frequency of 2000 Hz, which means everything above this cutoff is attenuated (i.e., no treble). The high-pass filter has a cutoff frequency of 500 Hz, attenuating everything below (i.e., no bass). In either modification, the important frequency range of 500–2000 Hz is left intact. The cutoff frequencies were determined from experimentation and observation.
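Pure Data’s lop~ and hip~ are simple one-pole filters. The sketch below implements textbook one-pole low-pass and high-pass filters with the cutoffs used in the game; it approximates the behavior described here but does not claim to reproduce Pd’s exact coefficients.

```python
import numpy as np

def one_pole_lowpass(x, sr, fc=2000.0):
    # Attenuates content above fc (i.e., removes treble), like lop~ at 2000 Hz.
    rc = 1.0 / (2.0 * np.pi * fc)
    alpha = (1.0 / sr) / (rc + 1.0 / sr)
    y = np.zeros(len(x))
    y[0] = alpha * x[0]
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

def one_pole_highpass(x, sr, fc=500.0):
    # Attenuates content below fc (i.e., removes bass), like hip~ at 500 Hz.
    rc = 1.0 / (2.0 * np.pi * fc)
    a = rc / (rc + 1.0 / sr)
    y = np.zeros(len(x))
    y[0] = a * x[0]
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y
```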
The pitch is modified by −1000 cents, −500 cents, +500 cents, or +1000 cents; a 100-cent change corresponds to a semitone pitch shift. The pitch-change values were, like the cutoff frequencies, determined by experimentation. The modulations are done in the frequency domain, which leaves durations unaltered (using a modification of the Pure Data patch I07-phase.vocoder). All sounds used for the experiment were uncompressed mono audio files sampled at 44.1 kHz to facilitate the frequency-domain manipulations.
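The mapping from cents to a frequency ratio follows directly from the definition of the cent (1200 cents per octave); the phase vocoder then applies this ratio while preserving duration. A short sketch of the conversion:

```python
def cents_to_ratio(cents):
    # 1200 cents = 1 octave, so the frequency ratio is 2^(cents/1200).
    return 2.0 ** (cents / 1200.0)

# The four pitch shifts used in the game:
for cents in (-1000, -500, 500, 1000):
    print(f"{cents:+5d} cents -> frequency ratio {cents_to_ratio(cents):.3f}")
# -1000 -> 0.561, -500 -> 0.749, +500 -> 1.335, +1000 -> 1.782
```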

2.2. User Interface and Gameplay

Figure 1 shows the different screens of the user interface. First, in Figure 1a, the user listens to the target piece to reconstruct by tapping the large ball. Then, the user shakes the tablet to break apart the target piece into several fragments (sound objects) represented by small identical balls; Figure 1b shows the resulting display after shaking the tablet.
The intended gameplay is to
  • tap and listen to the sounding objects
  • long press and adjust EQ and pitches
  • drag and arrange the objects horizontally from left to right
  • click the menu item to check and evaluate the solution
where the steps can be executed in any order and repeated in a trial-and-error procedure.
The four square buttons at the top of the screen (see Figure 1b) are used to evaluate the order, pitch, and EQ (“How did I do?”), replay the target sound (“Play solution”), play the current arrangement of objects as it appears on the screen (“Play the current”), and end the session (“Oh, I give up”). Two buttons in the lower left are “cheat buttons”, described shortly.
EQ and pitch modification dialogs, as seen in Figure 1c, are activated by a long press on an object. Radio buttons are presented in random order and colors so as not to provide visual cues that could help solve the puzzle. Each press on a radio button will play the sound with the selected adjustment.

2.3. Game Difficulty

The difficulty level is determined by the number of pieces generated, the pitch shifts, the filtering, and last but not least the characteristics of the sound recording. When the shaking yields many pieces, the game is in most cases more difficult because the durations of the sound objects get shorter, and all other factors that increase difficulty are more likely to occur and have a larger impact. In the version used here, the game had to be solved perfectly to be counted as accomplished; for other purposes, the threshold for success could be adjusted.
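A minimal sketch of how such a success check could look is given below; the data structure and threshold logic are hypothetical illustrations, not the game’s actual code.

```python
from dataclasses import dataclass

@dataclass
class SoundObject:
    target_index: int  # position of the fragment in the original recording
    pitch_cents: int   # current pitch offset; 0 means unaltered
    eq: str            # "none", "lowpass", or "highpass"

def is_achieved(arrangement, max_order_errors=0):
    # The experiment used max_order_errors=0, i.e., a perfect solve;
    # a relaxed threshold could tolerate a limited number of order errors.
    order_errors = sum(obj.target_index != i for i, obj in enumerate(arrangement))
    mods_ok = all(obj.pitch_cents == 0 and obj.eq == "none" for obj in arrangement)
    return order_errors <= max_order_errors and mods_ok
```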
For both modulation types (pitch and EQ), it is possible to set any arbitrary values in a text file, and in this way augment the game with increasing difficulty and levels. However, following a testing phase with intended users, it was not considered necessary at this point.
The sound objects also contribute to difficulty differentiation. Shake force determines the number of generated objects, and the difficulty naturally increases with more objects. Object duration is determined by their number and also by the target’s total duration. Finally, the cut points of the objects may fall at any place in the original sound: imagine, for instance, a drum loop of four bars cut into three objects (easier) compared to four objects (harder), or a piece cut into a large number of objects, some of which may end up containing only silence. In our experiment this was not an issue, but for further use one should apply an automatic analysis of the target sound to avoid unsolvable puzzles.
Alternatively, if the puzzle gets too hard to solve, the pitch and EQ modifications can be automatically corrected using the Pitch Cheat and EQ Cheat buttons in the bottom left corner of the screen. Furthermore, the user can choose to replay the target sound by clicking Play solution. The cheat buttons and the replay function can be deactivated in the settings file.

2.4. Preparation and Data Collection

Before using the game for experimental purposes, the difficulty settings text file should be edited and a collection of pieces should be prepared in folders. Any type and duration of audio recording can be used.
A game session starts when a player listens to the target sound (clicking the large ball) and ends with the final notification in Figure 1d if the participant can build the target sound, or when clicking the “give-up button”. Each game session is recorded in a log file. The log file includes time-stamped information about the session and all the user’s actions on the touchscreen.
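The exact log format is not reproduced here; assuming a simple tab-separated layout of time-stamped actions (a hypothetical format), a session log could be read as follows.

```python
def read_session_log(path):
    # Read lines of "<milliseconds>\t<action>" (assumed format) into tuples.
    events = []
    with open(path) as f:
        for line in f:
            timestamp, action = line.rstrip("\n").split("\t", 1)
            events.append((int(timestamp), action))
    return events

# e.g., read_session_log("session.log") might yield
# [(0, "play_target"), (5210, "tap_object 3"), (7844, "long_press 3"), ...]
```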

3. Materials and Methods

An experiment was designed to collect gaming data and user evaluations. The experiment was conducted in accordance with the Declaration of Helsinki and with ethical approval from the involved universities. All included data are anonymous. Each participant was carefully briefed about the experiment and signed a consent form to participate. In the study, we only look at various time measurements, frequencies of interactions, and preference of sound material. In addition, participants provided information about their hearing acuity and music-listening experience. We conducted three sets of gaming sessions with a total of 36 participants (14 female). They were university students aged from 18 to 23, and were recruited into equally sized groups as follows:
Group   Hearing      Language   Musical experience
HI      Impairment   Japanese   —
NEX     Normal       Japanese   Low
EXP     Normal       Japanese   High
The Japanese hearing-impaired participants (HI) group was recruited from a university for hearing-impaired technology students. Eleven participants had profound degrees of hearing loss, while one participant had severe hearing loss [62]. Profound loss is considered to be above 90 dB, and severe in the range 70–90 dB. Hearing loss and acuity are measured with audiometers and expressed in decibels hearing level (dB HL). Because the human ear perceives sounds differently depending on frequency, decibels sound pressure level (dB SPL) cannot show hearing acuity by frequency [63]. Eleven of the participants used hearing aids in the experiment and one was a cochlear implantee. They could use their hearing aids according to their own preferences, with two intended benefits: their comfort, and an approximation of their typical listening situation. Though research with hearing-impaired persons tends to focus on either cochlear implant users or users of hearing aids, we do not divide them according to their hearing devices because our research interest is to provide them with opportunities to listen to sounds with joy.
The recruitment of Japanese hearing participants with low musical experience (NEX) and Japanese hearing participants with high musical experience (EXP) was based on their self-assessment of engagement in musical activities; NEX were recruited among students without ongoing music activities, while EXP had formal activities. As a simplified measure of music activity, we asked them to rate their musical experience in terms of listening to music in everyday life on five levels ranging from very rare to very often; the question included examples of listening situations. Figure 2 shows the ratio of their music-listening experiences. For HI, musical experience was registered but not used as a qualifier for inclusion in the experiment. There was one hearing-impaired participant who did not listen to music, but otherwise the musical experience of HI and NEX matched very well (the summed ratios of “very often” and “often”, and of “rarely” and “very rarely”, were the same between the two groups). One-way analysis of variance (ANOVA) shows significant differences in music-listening experiences (p = 0.02), and multiple comparisons show differences between HI/EXP and NEX/EXP.

3.1. Game Material

We prepared sound sets for four game “levels”, each with three game modes consisting of speech, music, and mixed sound material. We will use initial capital letters to denote that a puzzle condition is based on a Music, Speech, or Mixed target piece. The sound sets did not give the game levels increased difficulty; difficulty was handled as described above. The Speech pieces did not contain any musical sounds, and the Music pieces did not contain any vocals. The Mixed pieces were simply the combination of one Speech and one Music piece mixed into a new mono sound file. All pieces were 15 s long and normalized in Audacity (http://manual.audacityteam.org/man/normalize.html) to have the same peak amplitude.
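Peak normalization simply scales each file so that its maximum absolute sample value matches a common target, which is what Audacity’s Normalize effect does here; a NumPy sketch:

```python
import numpy as np

def peak_normalize(signal, target_peak=1.0):
    # Scale the signal so its maximum absolute amplitude equals target_peak.
    peak = np.max(np.abs(signal))
    return signal if peak == 0 else signal * (target_peak / peak)
```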
The speech recordings were from commercially available recordings by Japanese poetry readers, both female and male. Sets 1 and 4 were from old Japanese poems, while Set 2 was from a Japanese translation of an English poem, and the reading of Set 3 was from a Japanese pop song. Most Japanese young people would be familiar with the poems and the pop song.
The music recordings were excerpted from cello performances of well-known compositions. Three were by Japanese composers working in the field of classical music in films, and one was composed by Fauré. They were chosen based on pre-studies of the Music Puzzle (see e.g., [9]). Before deciding on a recording, we evaluated its suitability by listening to the mix; the main condition was that the speech should be easily intelligible through the music. Table 1 shows the four sets of sound pieces. The order of sets 2–4 was randomized.
For the experiment, we installed the game and sound material on four Nexus 7 tablets and two Samsung Galaxy tablets. Audio was presented through headphones with large cups so as to fit over and accommodate hearing aids; participants could choose between Sony MDR-XD200 (closed type), Audio-Technica ATH-AD500X (open type), or their personal headphones if their features were comparable.

3.2. Procedure

Each experimental session took one hour. First, there was an eight-minute preparation that consisted of reading an explanation of the game purpose and how to play, and a short demonstration. After that, participants gave their consent and other information such as their musical experience and hearing levels. They tried Set 1 as training for 15 min; then they proceeded to play with sets 2–4 for 35 min. We encouraged but did not require them to play with all modes (sound materials) in a set, in no specific order. Finally, participants gave post-descriptions of their preference regarding the sound material and the experienced difficulty.
The experiment took place in a classroom setting with 2–6 students at a time. Each participant had a tablet and headphones. They were instructed to adjust the sound volume to a comfortable level and were free to readjust this setting when necessary. They were also allowed to take breaks if needed. They received a token of gratitude of about USD 10 for their participation.

4. Results

We describe the results of the experiments by comparing the three participant groups in subjective evaluations and in the way participants played the game. For comparison of the groups, analysis of variance (ANOVA) was used. Post hoc analyses were performed with the Tukey–Kramer procedure on the independent observations, with the level of statistical significance set at p < 0.05.
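For reference, this pipeline can be reproduced with standard tools. The sketch below uses SciPy and statsmodels on made-up placeholder data; statsmodels’ pairwise_tukeyhsd implements Tukey’s HSD, which coincides with Tukey–Kramer when group sizes are equal, as here.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-participant scores (n = 12 per group); placeholder data only.
rng = np.random.default_rng(0)
hi, nex, exp = rng.normal(loc=[0.4, 0.7, 0.8], scale=0.15, size=(12, 3)).T

f_stat, p_value = f_oneway(hi, nex, exp)  # one-way ANOVA across the three groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

scores = np.concatenate([hi, nex, exp])
groups = ["HI"] * 12 + ["NEX"] * 12 + ["EXP"] * 12
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))  # post hoc pairwise tests
```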

4.1. Games

Game results are described in terms of the number of sessions played and game achievement. Data were extracted from the log files.

4.1.1. Number of Sessions

The numbers of game sessions during the experiment for each participant group were HI = 81, NEX = 79, and EXP = 104. For HI, the ratio of Music sessions was larger and the ratio of Mixed sessions smaller, while for Speech there were no differences between the groups. The number of game sessions and the ratios for the specific materials are shown in Table 2.

4.1.2. Achievement of Games

Sessions could be ended in three ways: an achieved (completed) puzzle, the “give up” option, and the tablet’s back button. A session was considered “achieved” only when the order of sound objects, the EQ, and the pitch were all correct—thresholds for correctness can be adjusted in the settings (see Figure 1d). A “give up” exit is recorded when a user presses this action button. When pressing the system’s back button, home button, or power button, the logged action is an exit by back button; this is an unwanted action but not easily circumvented. It was also observed to happen by mistake.
Table 2(c) shows the ratio of achievement for each participant group. For all three modes (Speech, Music, and Mixed), achievement by HI was lower than for the other groups. In all modes, ANOVA showed significant differences between participant groups. The post hoc test shows that there were differences between HI and the other participant groups in all modes.

4.2. Subjective Evaluation

The subjective evaluations were collected from questionnaire data both during and after the sessions. After each game, its difficulty was rated on a five-level scale. A control question asked which sound material had been heard, to confirm that the sounds were played properly. We recorded no errors in determining the mode for the hearing groups, but some for HI—this was an anticipated result and does not imply errors in the playback.

4.2.1. Fondness

The post-activity questionnaire asked participants about the game in terms of enjoyment. The results for rated fondness, derived from how entertaining the Music Puzzle was, and for preferences towards the material are shown in Figure 3a,b, respectively. The questions were answered as follows:
  • How entertaining was the game? Hearing participants with low reported musical activity (NEX) gave the lowest evaluation: 3 of 12 found it to be “boring” and only half found it to be entertaining. Taking all groups together, 28 out of 36 rated the game as entertaining, and in the other two groups only 1 person found it boring.
  • Which material do you like the best? The three groups showed a difference in their preferred sound mode. The preference for the Music mode was greatest in the HI group, while NEX clearly preferred the Mixed mode. None of the groups rated the Speech condition highly.
  • Would you use the game if it was free? With a similar distribution across all groups, 69% answered that they would use the Music Puzzle if it were free. The game is not currently available in the Android or iOS app stores, and there are no plans to charge for use when it is publicly released.

4.2.2. Difficulties

We asked participants to rate the difficulty from very difficult to very easy using five levels for each game session. In Table 2(d), the ratios of low-difficulty ratings (the three lowest levels) are given for each group and mode. For the hearing participants NEX and EXP, both the Mixed and Speech modes were considered easy, while HI found the Speech mode to be hard. NEX and EXP differed in their rating of the music stimuli, where NEX even rated the Music mode as harder than the HI group did. HI had similar ratings for the Speech and Music modes, but differed in their rating of the Mixed mode.
The differences between the participant groups were shown by ANOVA in the Speech and Mixed modes, where all p-values were small (p < 0.01). The multiple comparison showed differences between HI-NEX and HI-EXP in both modes. As described above, two clusters were thus formed for the Speech and Mixed modes: one consisting of HI and the other consisting of the hearing participants (NEX and EXP). Table 3 rows (a)–(c) summarize the results of multiple comparison on the number of performed game sessions, the ratio of achieved game sessions, and the subjective evaluation of difficulties, by each mode.

4.3. Interaction

The way participants played the Music Puzzle can be described in terms of clicks on sound objects and buttons. We recorded all screen interactions, including those with the sound objects and the interface buttons described earlier. In the following, we analyze game interaction and playing strategies using the number of clicks and time measurements.

4.3.1. Pitch and EQ Cheating

When a user clicks “Pitch Cheat” or “EQ Cheat”, pitch alterations and filtering, respectively, are corrected for all sound objects. Since a cheat is persistent and thus available only once in a game session, we looked at the ratio of sessions using cheat buttons for each type of material. Figure 4a,b show these ratios; HI used cheat buttons in about half of the game sessions, while the other groups used these functions more sparingly. There was little difference between the two cheat modes: as it appears, HI in particular tended to use both buttons when “cheating”.

4.3.2. Adjusting Pitch and EQ

When a user decides to adjust pitch or EQ (Figure 1c), clicking the radio buttons will play the available variations. The only way to find the correct setting is by listening; thus, the number of clicks indicates how many trials are needed to identify the unaltered one. Figure 4c,d show these numbers, and seemingly, HI performed better than the other participant groups. As we will discuss, however, this is a consequence of the problems of discriminating timbre and pitch differences among HI, which leads to activating the cheats. Note that for pitch there are five options, while EQ only has three; this is observable in the figure.

4.3.3. Interaction with Sound Objects and Buttons

The interaction can be divided into compulsory and optional actions. It is necessary to click the sound objects and listen in order to arrange them in the correct order. One can click two or more objects in succession to play a sequence. Furthermore, the evaluation “How did I do?” must be clicked at least once for completing a game. However, players do not need to listen to the target (solution) sound or the current sound during a game session. The function buttons “Play Solution”, “Play the current” and “How did I do?” can be clicked any number of times. The plots in Figure 5 show the average number of clicks on sound objects and function buttons.
Table 3 rows (d)–(g) show the results of multiple comparison of button clicks between participant groups. Similar to the difficulty ratings in Table 3(c), the HI group contrasts with the other groups in its tendency to click “Play Solution”. HI evaluated the games (“How did I do?”) more often than the hearing groups. There were no differences for “Play the current”, and the gameplay did not require one to click it.

4.3.4. Duration and Speed

Duration is defined here as the time it takes to finish a session, whether the session was successfully achieved or not. We also calculate the time between clicks, or the inter-onset intervals (IOI) of clicks. Here we include subsequent clicks of object clicks, pitch or EQ changes, playing the solution, and playing the current arrangement. Speed is the reciprocal of the duration per click and represents the swiftness of interaction and gameplay. Figure 6a,b show game duration and IOI for each participant group and each mode.
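Computing IOIs from the logged click timestamps is straightforward; a minimal sketch with hypothetical click times in seconds:

```python
import numpy as np

def inter_onset_intervals(click_times):
    # IOIs are the time differences between consecutive clicks.
    t = np.asarray(sorted(click_times), dtype=float)
    return np.diff(t)

iois = inter_onset_intervals([0.0, 1.8, 3.1, 6.4, 7.0])  # example timestamps
print(f"mean IOI = {iois.mean():.2f} s")
```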
Differences between participant groups were found in the duration for the Speech and Mixed materials, and there were also differences in IOI for all modes. It should be noted that calculating IOI in this kind of game is not trivial because some actions necessitate longer intervals than others, and the results must be interpreted with some caution. Table 3(h),(i) shows the results of multiple comparison of duration and IOI between participant groups and materials. The differences between modes on the time measurements were significant (p < 0.01), where the Music mode differed from both the Speech and Mixed modes.

4.4. Summary

A summary of the effects and differences found in this section is shown in Table 4 (Table 3 shows the results, while Table 4 explains the grouping). We consider hearing acuity, level of music experience, and also language proficiency, based on the assumption that the hearing-impaired generally show language disorders [3].
Differences in hearing acuity are evident for the HI group, and in music experience for the EXP group. For language proficiency, HI differs from both NEX and EXP.

5. Discussion

Through comparing the results of the three participant groups, we discuss the effects that hearing acuity, music experience, and language proficiency have on the outcome of playing the Music Puzzle. We also consider how people with and without hearing loss enjoy playing the game.

5.1. The Effect of Hearing Acuity

Hearing loss affects the proficiency of playing the Music Puzzle, and accordingly the experiment revealed differences between the HI participant group and the others (NEX and EXP): these are summarized in the first row of Table 4, and more details can be found in Table 3. The following results from multiple comparisons show differences due to hearing loss:
  • Lower ratio of the performed Mixed mode (Table 2(b)).
  • Lower ratio of achieved sessions for the Speech mode (Table 3(b)).
  • Higher ratio of clicks on “Pitch cheat” for Speech and Mixed modes (Figure 4a).
  • Higher ratio of clicks on “EQ cheat” for Speech and Mixed modes (Figure 4b).
  • Fewer attempts at “change pitch” in all modes (Figure 4c).
  • Fewer attempts at “change EQ” in all modes (Figure 4d).
  • Higher number of clicks on “How did I do?” for Music and Mixed modes (Table 3(g)).
  • Longer inter-onset interval (IOI) of clicks for all modes (Table 3(i)).
These findings lead to considering the following possible interpretations:
  • Hearing impairment introduces difficulties in extracting useful cues from both music and speech played simultaneously, and from “listening to” speech. Differences between people with and without hearing loss were found for the Mixed condition in the ratio of achieved sessions. This implies that the overlapping of sounds in speech and music makes the game harder to solve for people with hearing loss, or that those with normal hearing can better utilize the additional cues. In the ratio of the performed modes, differences between people with and without hearing loss were found in Speech. In other words, speech, more than music, is difficult for HI. Language proficiency will thus be helpful, but the game’s puzzles are still not easily solved by constructing lexical meaning from the fragments; these fragments are short, and the poems relatively intricate.
  • Hearing impairment introduces difficulties in distinguishing pitch alterations and filtering. Cheat buttons were used more often by people with hearing loss, and they experienced a greater challenge in correcting pitch and EQ. Cheat buttons were also used by hearing participants when they played with the Music material. This implies that remembering nuances of pitch and EQ adjustments in music was also difficult for people without hearing loss.
  • Persons with hearing impairment take longer to process a heard sound. The study showed that people with hearing loss wait longer after clicking a sound object or another button before clicking a new one. We know that hearing acuity does not correspond with problems of interacting with computers [64], so the reasons for the timing differences could be: (1) they listen to the whole sound from a sounding object or the effect of other buttons, while people without hearing loss listen only to the start, or only to a certain extent; (2) they listen to the sound and then think for a while; or (3) the time to start processing sound could be later for HI. Considering that the lengths of the fragments correspond to the intervals recorded for NEX and EXP (which means these groups would click the next sound object without hesitation), (1) is a less plausible explanation. We conclude that HI adopt a focused listening strategy which involves a longer time for processing the played sounds.

5.2. The Effect of Music Experiences

We found no distinct differences that could be explained by music experience alone in this experiment, as seen in the right column comparing NEX and EXP in Table 3, but in the next section we will discuss effects that appeared in combination with speech material and language. We should remember that the less musically experienced hearing group in this experiment consisted of typical university students, who still have comparatively high exposure to music.
As described in our previous paper [65], one particular individual with hearing loss who engaged in many musical activities was able to complete all the sessions she tried in that experiment. Thus, including a group of people with hearing loss who have extensive musical activity may also yield different results.

5.3. The Effect of Language and Speech

While all the participants were native Japanese speakers, HI did not have equivalent language fluency in listening to speech (cf. [3]), and we can thus form two clusters: HI and NEX/EXP. Effects of language can be found in the differences shown in the row titled “Speech and language” in Table 4. One effect is found in the number of clicks on sound objects when playing with Speech material, as shown in Table 3(d). This shows the difficulty of using language cues in solving the puzzle, for instance by remembering fragments of a spoken sentence. Other effects are found in the following cases when playing the Speech and Mixed materials:
  • Subjective evaluation of difficulties (Table 3(c)).
  • Time to complete a game session (Table 3(h)).
These show no differences for music listening between the two clusters; while speech will be more problematic for the hearing-impaired, who can use fewer cues in constructing a meaningful whole, solving for music is comparable between HI and NEX/EXP. Likewise, the third row in Table 4 shows no differences between NEX and EXP. Differences were found with HI for the duration of game completion when playing in the Mixed and Speech modes (Table 3(h)). However, NEX took longer to play the Music mode than EXP and rated it as more difficult (Table 2(d)). They also liked it less (Figure 3). This implies that musical experience affects play after all.

5.4. General Discussion

The game was generally well received, as seen in Figure 3a; in fact, regardless of hearing loss, about 70% claimed they would use it—although this is a speculative measurement, the similar proportions between groups are illustrative. As could be expected, the results of this experiment show that people with hearing loss could not complete puzzles to the same extent as the hearing control groups. However, they enjoyed the puzzles and liked the Music mode best among the three sound materials. This implies that people with hearing loss have good motivation for music listening through the game. It is worth noting that the experiment allowed the participants to choose modes quite freely, and that an alternative test design would highlight other issues in addition to preference.
A related and unexpected finding was that NEX and EXP rated the Music mode as rather difficult. NEX did not prefer the Music mode, while EXP did. Could it be that this attitude in NEX was caused by a belief that there were external expectations about understanding music that they needed to fulfill? Possibly they show anxiety about making mistakes, which is not seen as clearly for EXP, who would likely have a more analytical approach towards music listening.
The number of clicks for adjusting pitch was much higher in the Music condition than in the others. This should mean that pitch manipulations in the Speech condition were more easily detected (our material had both male and female readers). Because Japanese is a pitch-accent language [66], prosodic cues are probably used in solving the puzzle; this would need to be investigated further with speech material containing fewer pitch variations.
As seen, HI spent much more time on pitch and EQ adjustments in all conditions. The most probable reason that we can see is that the task was simply too hard, and they gave up and used the cheat buttons. During the experiment design and set-up phase, it became very clear that our initial values for the manipulations were far too subtle. The implemented pitch shifts of 5 and 10 semitones, and the filter cutoffs of 2000 Hz for low-pass and 500 Hz for high-pass, are easily identifiable to a person with normal hearing, except possibly for pitch in music pieces with solo instruments. Additionally, the maximum possible number of generated objects was reduced from around 18–20 to around 6–7 pieces.
The waiting time between two clicks relates to the way people with hearing loss play the game. Although it is currently not clear why they wait longer after clicks, this could depend on whether they remember any elements of the music. If so, it would be helpful to understand what they remember, in order to help them enjoy music more.

5.5. Further Development

In its current gameplay design and aesthetics, the game leaves much to be desired in order to compete with the attractiveness of trending games on the market. However, the functionality worked according to planned use, and the material was sufficient for the scope of testing. From here, as we now consider the concept verified as beneficial for a training tool, the Music Puzzle will be subject to changes in: (1) graphical design and interaction; (2) game types and difficulty progression; (3) sound material and transformations; (4) logging and social connectivity; (5) targeted training recommendations; and likely (6) a platform change to Web Audio (see http://www.w3.org/TR/webaudio/).

6. Conclusions

The Music Puzzle—an audio-based puzzle game—had the purpose of giving persons with hearing loss an effective and entertaining alternative exposure to focused listening. Research has shown that focused listening is beneficial for training listening ability and also for language development. The game was tailored to hearing impairments but was designed to be engaging for everyone. The game does not require much from the user in terms of previous training or gaming experience, and was designed to be inclusive for the hearing-impaired (the game has since been developed with alternative interfaces and for other purposes not reported here). Despite the fact that people with hearing loss could not complete nearly as many started games as their hearing peers, the task was still not found to be too difficult, and it can open new prospects for voluntary, focused listening to music or other sounds.
Care should be taken in selecting sound materials and in designing the gameplay to accommodate differences in processing time for sounds. Although speech and language are important objectives for training, music was both found to have appreciated qualities and was preferred by the target user group. Music was also found to be more challenging as a game task in general.
Many music training programs require instruction from a professional in an equipped, dedicated space, but the Music Puzzle is a game designed to be used at leisure. Anyone with access to commonplace technology such as a smartphone or tablet can play anywhere, alone, at any time. This way, opportunities for listening to music attentively increase with no special resources: the game can be acquired and used for free, expanded upon, and used for different purposes. With further development of its design, the Music Puzzle is a conceptually different and attractive audio game.

Supplementary Materials

Supplementary materials can be accessed at: https://www.mdpi.com/2076-3417/7/12/1278/s1.

Acknowledgments

This work was supported by JSPS KAKENHI Grant Number 26282001.

Author Contributions

K.F.H. and R.H. contributed equally in conceiving, designing and performing the experiments, analyzing the data, and writing the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HI   Japanese hearing-impaired participants
NEX  Japanese hearing participants with low musical experience
EXP  Japanese hearing participants with high musical experience
IOI  Inter-onset interval
EQ   Equalization

References

  1. Torppa, R.; Huotilainen, M.; Leminen, M.; Lipsanen, J.; Tervaniemi, M. Interplay between singing and cortical processing of music: A longitudinal study in children with cochlear implants. Front. Psychol. 2014, 5, 1389. [Google Scholar] [CrossRef] [PubMed]
  2. Looi, V.; Gfeller, K.; Driscoll, V.D. Music appreciation and training for cochlear implant recipients: A review. Semin. Hear. 2012, 33, 307–334. [Google Scholar] [PubMed]
  3. Delage, H.; Tuller, L. Language Development and Mild-to-Moderate Hearing Loss: Does Language Normalize With Age? J. Speech Lang. Hear. Res. 2007, 50, 1300. [Google Scholar] [CrossRef]
  4. Gaver, W.W. What in the World Do We Hear?: An Ecological Approach to Auditory Event Perception. Ecol. Psychol. 1993, 5, 1–29. [Google Scholar] [CrossRef]
  5. Tremblay, K.; Kraus, N.; McGee, T.; Ponton, C.; Otis, B. Central auditory plasticity: changes in the N1-P2 complex after speech-sound training. Ear Hear. 2001, 22, 79–90. [Google Scholar] [CrossRef] [PubMed]
  6. Schäfer, T.; Sedlmeier, P.; Städtler, C.; Huron, D. The psychological functions of music listening. Front. Psychol. 2013, 4. [Google Scholar] [CrossRef] [PubMed]
  7. Song, J.H.; Skoe, E.; Banai, K.; Kraus, N. Training to Improve Hearing Speech in Noise: Biological Mechanisms. Cereb. Cortex 2011, 22, 1180–1190. [Google Scholar] [CrossRef] [PubMed]
  8. Hansen, K.F.; Hiraga, R.; Li, Z.; Wang, H. Music Puzzle: An audio-based computer game that inspires to train listening abilities. In Advances in Computer Entertainment; Lecture Notes in Computer Science; Reidsma, D., Katayose, H., Nijholt, A., Eds.; Springer International Publishing: New York, NY, USA, 2013; Volume 8253, pp. 540–543. [Google Scholar]
  9. Hiraga, R.; Hansen, K.F. Sound preferences of persons with hearing loss playing an audio-based computer game. In Proceedings of the 3rd ACM International Workshop on Interactive Multimedia on Mobile & Portable Devices, Barcelona, Spain, 22 October 2013; ACM: New York, NY, USA, 2013; pp. 25–30. [Google Scholar]
  10. Venail, F.; Camilleri, M.; Lorenzi, A. What’s a Hearing Impairment? A tinnitus? Available online: http://www.cochlea.org/en/impairment (accessed on 30 November 2017).
  11. McCullagh, J. Auditory Acuity. In Encyclopedia of Autism Spectrum Disorders; Springer: New York, NY, USA, 2013; p. 312. [Google Scholar]
  12. Glennie, E. Teach the World to Listen. Available online: http://www.evelyn.co.uk (accessed on 30 November 2017).
  13. Whittaker, P. Dr. Paul Whittaker OBE. Available online: http://www.paulwhittaker.org.uk/ (accessed on 30 November 2017).
  14. Lane, D. Pianist & Artistic Director. Available online: https://britishmusiccollection.org.uk/article/artistic-director-music-and-deaf-danny-lane (accessed on 30 November 2017).
  15. Gallaudet Dance Company. Available online: http://www.gallaudet.edu/department-of-art-communication-and-theatre/gallaudet-dance-company (accessed on 30 November 2017).
  16. Enriching Lives through Music. Available online: http://www.matd.org.uk (accessed on 30 November 2017).
  17. Music for Little Ears. Available online: http://hear-me-now.org/preschool-music-class/ (accessed on 30 November 2017).
  18. Feel the Music. Available online: http://mahlerchamber.com/learning/education-and-outreach/feel-the-music-programme (accessed on 30 November 2017).
  19. NDCS Resource: How to Make Music Activities Accessible for Deaf Children and Young People. Available online: http://www.ndcs.org.uk/document.rm?id=8830 (accessed on 30 November 2017).
  20. Hash, P.M. Teaching instrumental music to deaf and hard of hearing students. Res. Issues Music Educ. 2003, 1, 1–8. [Google Scholar]
  21. Darrow, A.A. The role of music in deaf culture: Deaf students’ perception of emotion in music. J. Music Ther. 2006, 43, 2–15. [Google Scholar] [CrossRef] [PubMed]
  22. Pereira, C.S.; Teixeira, J.; Figueiredo, P.; Xavier, J.; Castro, S.L.; Brattico, E. Music and emotions in the brain: Familiarity matters. PLoS ONE 2011, 6, e27241. [Google Scholar] [CrossRef] [PubMed]
  23. Fulford, R.; Ginsborg, J.; Goldbart, J. Learning not to listen: The experiences of musicians with hearing impairments. Music Educ. Res. 2011, 13, 447–464. [Google Scholar] [CrossRef]
  24. Brandt, A.K.; Slevc, R.; Gebrian, M. Music and early language acquisition. Front. Psychol. 2012, 3. [Google Scholar] [CrossRef] [PubMed]
  25. Kirschner, S.; Tomasello, M. Joint music making promotes prosocial behavior in 4-year-old children. Evolut. Hum. Behav. 2010, 31, 354–364. [Google Scholar] [CrossRef]
  26. Fujioka, T.; Ross, B.; Kakigi, R.; Pantev, C.; Trainor, L.J. One year of musical training affects development of auditory cortical-evoked fields in young children. Brain 2006, 129, 2593–2608. [Google Scholar] [CrossRef] [PubMed]
  27. Kraus, N.; Chandrasekaran, B. Music training for the development of auditory skills. Nat. Rev. Neurosci. 2010, 11, 599–605. [Google Scholar] [CrossRef] [PubMed]
  28. Shahin, A.J.; Roberts, L.E.; Chau, W.; Trainor, L.J.; Miller, L.M. Music training leads to the development of timbre-specific gamma band activity. Neuroimage 2008, 41, 113–122. [Google Scholar] [CrossRef] [PubMed]
  29. Strait, D.L.; Slater, J.; O’Connell, S.; Kraus, N. Music training relates to the development of neural mechanisms of selective auditory attention. Dev. Cognit. Neurosci. 2015, 12, 94–104. [Google Scholar] [CrossRef] [PubMed]
  30. Tierney, A.T.; Krizman, J.; Kraus, N. Music training alters the course of adolescent auditory development. Proc. Natl. Acad. Sci. USA 2015, 112, 10062–10067. [Google Scholar] [CrossRef] [PubMed]
  31. Jäncke, L.; Sandmann, P. Music listening while you learn: No influence of background music on verbal learning. Behav. Brain Funct. 2010, 6, 3. [Google Scholar] [CrossRef] [PubMed]
  32. Barton, C. Music and literacy development in young children with hearing loss: A duet. Imagine 2011, 2, 53–55. [Google Scholar]
  33. Gfeller, K.; Driscoll, V.; Kenworthy, M.; Van Voorst, T. Music therapy for preschool cochlear implant recipients. Music Ther. Perspect. 2011, 29, 39–49. [Google Scholar] [CrossRef] [PubMed]
  34. Chasin, M.; Hockley, N.S. Some characteristics of amplified music through hearing aids. Hear. Res. 2014, 308, 2–12. [Google Scholar] [CrossRef] [PubMed]
  35. Chasin, M.; Russo, F.A. Hearing aids and music. Trends Amplif. 2004, 8, 35–47. [Google Scholar] [CrossRef] [PubMed]
  36. Hopyan, T.; Peretz, I.; Chan, L.P.; Papsin, B.C.; Gordon, K.A. Children using cochlear implants capitalize on acoustical hearing for music perception. Front. Psychol. 2012, 3, 425. [Google Scholar] [CrossRef] [PubMed]
  37. Limb, C.J.; Roy, A.T. Technological, biological, and acoustical constraints to music perception in cochlear implant users. Hear. Res. 2014, 308, 13–26. [Google Scholar] [CrossRef] [PubMed]
  38. Roy, A.T.; Jiradejvong, P.; Carver, C.; Limb, C.J. Assessment of sound quality perception in cochlear implant users during music listening. Otol. Neurotol. 2012, 33, 319–327. [Google Scholar] [CrossRef] [PubMed]
  39. Nakata, T.; Trehub, S.E.; Kanda, Y. Effect of cochlear implants on children’s perception and production of speech prosody. J. Acoust. Soc. Am. 2012, 131, 1307. [Google Scholar] [CrossRef] [PubMed]
  40. Vongpaisal, T.; Trehub, S.E.; Schellenberg, E.G. Song recognition by children and adolescents with cochlear implants. J. Speech Lang. Hear. Res. 2006, 49, 1091–1103. [Google Scholar] [CrossRef]
  41. Sucher, C.M.; McDermott, H.J. Pitch ranking of complex tones by normally hearing subjects and cochlear implant users. Hear. Res. 2007, 230, 80–87. [Google Scholar] [CrossRef] [PubMed]
  42. Kong, Y.Y.; Cruz, R.; Jones, J.A.; Zeng, F.G. Music perception with temporal cues in acoustic and electric hearing. Ear Hear. 2004, 25, 173–185. [Google Scholar] [CrossRef] [PubMed]
  43. Caruso, V.C.; Balaban, E. Pitch and timbre interfere when both are parametrically varied. PLoS ONE 2014, 9, e87065. [Google Scholar] [CrossRef] [PubMed]
  44. Macherey, O.; Delpierre, A. Perception of musical timbre by cochlear implant listeners: a multidimensional scaling study. Ear Hear. 2013, 34, 426–436. [Google Scholar] [CrossRef] [PubMed]
  45. Rochette, F.; Moussard, A.; Bigand, E. Music lessons improve auditory perceptual and cognitive performance in deaf children. Front. Hum. Neurosci. 2014, 8, 488. [Google Scholar] [CrossRef] [PubMed]
  46. Petersen, B.; Weed, E.; Sandmann, P.; Brattico, E.; Hansen, M.; Sørensen, S.D.; Vuust, P. Brain responses to musical feature changes in adolescent cochlear implant users. Front. Hum. Neurosci. 2015, 9, 7. [Google Scholar] [CrossRef] [PubMed]
  47. Zhang, T.; Dorman, M.F.; Fu, Q.J.; Spahr, A.J. Auditory training in patients with unilateral cochlear implant and contralateral acoustic stimulation. Ear Hear. 2013, 33, e70–e79. [Google Scholar] [CrossRef] [PubMed]
  48. Wouters, P.; van Nimwegen, C.; van Oostendorp, H.; van der Spek, E.D. A meta-analysis of the cognitive and motivational effects of serious games. J. Educ. Psychol. 2013, 105, 249–265. [Google Scholar] [CrossRef]
  49. Boyle, E.A.; Hainey, T.; Connolly, T.M.; Gray, G.; Earp, J.; Ott, M.; Lim, T.; Ninaus, M.; Ribeiro, C.; Pereira, J. An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Comput. Educ. 2016, 94, 178–192. [Google Scholar]
  50. Andersson, U.; Josefsson, P.; Pareto, L. Challenges in designing virtual environments training social skills for children with autism. Int. J. Disabil. Hum. Dev. 2006, 5, 105–111. [Google Scholar] [CrossRef]
  51. Boyd, L.E.; Ringland, K.E.; Haimson, O.L.; Fernandez, H.; Bistarkey, M.; Hayes, G.R. Evaluating a Collaborative iPad Game’s Impact on Social Relationships for Children with Autism Spectrum Disorder. ACM Trans. Access. Comput. 2015, 7, 1–18. [Google Scholar] [CrossRef]
  52. Friberg, J.; Gärdenfors, D. Audio games. In Proceedings of the 2004 ACM SIGCHI International Conference on Advances in Computer Entertainment Technology—ACE’04, Singapore, 3–4 June 2004; ACM Press: New York, NY, USA, 2004. [Google Scholar]
  53. Rovithis, E. A classification of audio-based games in terms of sonic gameplay and the introduction of the audio-role-playing-game: Kronos. In Proceedings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound—AM’12, Corfu, Greece, 26–28 September 2012; ACM Press: New York, NY, USA, 2012. [Google Scholar]
  54. Carvalho, J.; Guerreiro, T.; Duarte, L.; Carriço, L. Audio-based puzzle gaming for blind people. In Proceedings of the Mobile Accessibility Workshop at MobileHCI (MOBACC), Linz, Austria, 11–13 July 2012. [Google Scholar]
  55. Brieger, S. Sound Hunter: Developing a Navigational HRTF-Based Audio Game for People with Visual Impairments. In Proceedings of the Sound and Music Computing Conference; Bresin, R., Ed.; Logos Verlag: Berlin, Germany, 2013; pp. 245–252. [Google Scholar]
  56. Jaime, J.; Barbancho, I.; Urdiales, C.; Tardón, L.J.; Barbancho, A.M. A new multiformat rhythm game for music tutoring. Multimed. Tools Appl. 2015, 75, 4349–4362. [Google Scholar] [CrossRef]
  57. Baratè, A.; Ludovico, L.A. Serious games for music education. A mobile application to learn clef placement on the stave. In Proceedings of the International Conference on Computer Supported Education (CSEDU), Aachen, Germany, 6–8 May 2013; SciTePress: Setúbal, Portugal, 2013; pp. 234–237. [Google Scholar]
  58. Respino, J.; Juana, S.J.; Solamo, M.; Feria, R. Pitch paradise: A mobile game as an educational tool for music. In Proceedings of the 2011 9th International Conference on Education and Information Systems, Technologies and Applications (EISTA), Orlando, FL, USA, 19–22 July 2011. [Google Scholar]
  59. Zhou, Y.; Sim, K.C.; Tan, P.; Wang, Y. MOGAT: Mobile games with auditory training for children with cochlear implants. In Proceedings of the 20th ACM International Conference on Multimedia—MM’12, Nara, Japan, 29 October–2 November 2012; ACM Press: New York, NY, USA, 2012. [Google Scholar]
  60. Duan, Z.; Gupta, C.; Percival, G.; Grunberg, D.; Wang, Y. SECCIMA: Singing and ear training for children with cochlear implants via a mobile application. In Proceedings of the 14th Sound and Music Computing Conference, Espoo, Finland, 5–8 July 2017; pp. 200–207. [Google Scholar]
  61. Brinkmann, P.; Kirn, P.; Lawler, R.; McCormick, C.; Roth, M.; Steiner, H.C. Embedding Pure Data with libpd. In Proceedings of the Pure Data Convention, Weimar, Germany, 30 May–1 June 2011; Volume 291. [Google Scholar]
  62. How to Read an Audiogram and Determine Degrees of Hearing Loss. Available online: http://www.nationalhearingtest.org/wordpress/?p=786 (accessed on 30 November 2017).
  63. Bauman, N. Understanding the Difference between Sound Pressure Level (SPL) and Hearing Level (HL) in Measuring Hearing Loss. Available online: http://hearinglosshelp.com/blog/understanding-the-difference-between-sound-pressure-level-spl-and-hearing-level-hl-in-measuring-hearing-loss/ (accessed on 30 November 2017).
  64. Maiorana-Basas, M.; Pagliaro, C.M. Technology Use Among Adults Who Are Deaf and Hard of Hearing: A National Survey. J. Deaf Stud. Deaf Educ. 2014, 19, 400–410. [Google Scholar] [CrossRef] [PubMed]
  65. Hiraga, R.; Hansen, K.F.; Kano, N.; Matsubara, M.; Terasawa, H.; Tabuchi, K. Music perception of hearing-impaired persons with focus on one test subject. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon Tong, Hong Kong, China, 9–12 October 2015; pp. 2407–2412. [Google Scholar]
  66. Tsujimura, N. An Introduction to Japanese Linguistics; John Wiley & Sons: New York, NY, USA, 2013. [Google Scholar]
Figure 1. The Music Puzzle gameplay interface as seen on a tablet. (a) Initiate a session, listen to the target music piece, and shake the tablet; (b) Listen and order the sound objects by finger touch. There are four action buttons: “How did I do?” (evaluate the current order), “Play Solution” (repeat the target piece), “Play the current” (play the order as seen on screen), and “Oh, I give up” (quit the puzzle); (c) Adjust pitch and equalization (EQ; filtering) for each object. The radio buttons are randomly colored and ordered so as not to give any visual cues to the solution; (d) Completed puzzle with an evaluation.
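The ordering task in panels (b) and (d) amounts to reconstructing a scrambled sequence of sound segments and scoring an attempt against the target order. The following is a minimal sketch of that mechanic in Python; the class, method names, and position-match scoring are illustrative assumptions, not the game’s implementation (the article cites libpd [61] for embedding Pure Data audio, which this sketch does not model).

```python
import random
from dataclasses import dataclass, field

@dataclass
class Puzzle:
    """Minimal model of one Music Puzzle round. Illustrative sketch only:
    names and the scoring rule are assumptions, not the game's actual code."""
    n_segments: int
    target: list[int] = field(init=False)
    current: list[int] = field(init=False)

    def __post_init__(self) -> None:
        self.target = list(range(self.n_segments))
        # "Shake the tablet": scramble the segment order for a new round.
        self.current = random.sample(self.target, k=self.n_segments)

    def swap(self, i: int, j: int) -> None:
        """Player reorders two sound objects by finger touch."""
        self.current[i], self.current[j] = self.current[j], self.current[i]

    def evaluate(self) -> float:
        """'How did I do?': here, the fraction of segments in the right slot."""
        hits = sum(a == b for a, b in zip(self.current, self.target))
        return hits / self.n_segments

puzzle = Puzzle(n_segments=4)
puzzle.swap(0, 1)
print(puzzle.current, f"score = {puzzle.evaluate():.2f}")
```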
Figure 2. Music-listening experiences of the three participant groups: hearing impaired (HI), and normal hearing with low (NEX) or high (EXP) music experience.
Figure 3. Ratings of fondness for playing the game. (a) Answers to “How entertaining was the Music Puzzle?”; (b) Preferred material for each participant group.
Figure 4. User interaction during play. The ratios of cheat-button use for correcting all altered pitches (a) and all filter (EQ) settings (b), and the number of clicks needed to change pitch (c) and EQ (d).
Figure 5. Comparison of the players’ interaction during a game session. Number of clicks on (a) sound objects; (b) the “Play Solution” (replay target) button; (c) the “Play the current” button; and (d) the “How did I do?” evaluation button. Note that the scale differs in (a).
Figure 6. Durations and inter-onset intervals (IOI) in seconds: (a) durations of game sessions; (b) IOI between clicks (one click to the next).
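For reference, the IOI measure in (b) is simply the time elapsed between consecutive clicks in a session log. A minimal sketch, assuming click timestamps logged in seconds (function and variable names are illustrative):

```python
def inter_onset_intervals(timestamps: list[float]) -> list[float]:
    """Time in seconds between consecutive clicks (IOI).
    Assumes timestamps are sorted in ascending order."""
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

# Illustrative click log (seconds from session start; made-up values).
clicks = [0.0, 2.5, 3.5, 8.0]
print(inter_onset_intervals(clicks))  # [2.5, 1.0, 4.5]
```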
Table 1. Speech and music material used in the experiments.
Set | Speech Excerpt | Reader | Author | Music Excerpt | Composer
1 | Under a cherry tree | Female | M. Kajii | Après un rêve | G. Fauré
2 | Do not stand at my grave and weep | Male | M. Arai | Nausicaa requiem | J. Hisaishi
3 | Lemon | Female | M. Sada | Always with me | Y. Kimura
4 | Not losing to the rain | Male | K. Miyazawa | Castle in the sky | J. Hisaishi
Table 2. Overview of game sessions with (a) the total number of sessions played by each of the three groups: hearing impaired (HI), and normal hearing with low (NEX) or high (EXP) music experience; (b) the ratio of material played in each mode; (c) the ratio of completed sessions in each mode; and (d) the ratio of puzzles evaluated as easy (ratio = 1) for each participant group and mode.
 | HI | NEX | EXP
(a) Number of game sessions, total | 81 | 79 | 104
(b) Game sessions ratio, Speech | 0.37 | 0.37 | 0.34
(b) Game sessions ratio, Music | 0.40 | 0.32 | 0.34
(b) Game sessions ratio, Mixed | 0.23 | 0.32 | 0.33
(c) Achieved sessions ratio, Speech | 0.35 | 0.93 | 1.00
(c) Achieved sessions ratio, Music | 0.25 | 0.83 | 0.85
(c) Achieved sessions ratio, Mixed | 0.23 | 0.95 | 1.00
(d) Evaluation of difficulty, Speech | 0.53 | 0.83 | 0.92
(d) Evaluation of difficulty, Music | 0.44 | 0.24 | 0.66
(d) Evaluation of difficulty, Mixed | 0.47 | 0.91 | 0.91
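Rows (a) through (c) can be combined into approximate absolute counts. For example, reading the HI column for speech mode (figures rounded to whole sessions):

81 total sessions × 0.37 ≈ 30 speech-mode sessions, of which 0.35 × 30 ≈ 10 were completed.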
Table 3. Significant differences between participant groups, shown using asterisks (* p < 0.05, ** p < 0.01), in (a) the ratio of game sessions; (b) the ratio of achieved sessions; (c) the subjective evaluation of game difficulty; (d) clicks on sound objects; (e) clicks on “Play Solution”; (f) clicks on “Play the current”; (g) clicks on “How did I do?”; (h) game duration for completing one puzzle; and (i) duration per click for a game session, measured as inter-onset intervals (IOI).
Comparisons were made pairwise between HI–NEX, HI–EXP, and NEX–EXP; modes without an asterisk showed no significant difference.
(a) Ratio of game sessions: Mixed *
(b) Ratio of achieved game sessions: no significant differences
(c) Difficulty of game: no significant differences
(d) Clicks on sound objects: Speech *
(e) Clicks on “Play Solution”: no significant differences
(f) Clicks on “Play the current”: no significant differences
(g) Clicks on “How did I do?” (evaluation): Speech *, Music *, Mixed **
(h) Game duration: no significant differences
(i) IOI of clicks: no significant differences
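The asterisks above follow the common convention of mapping p-values to significance levels. As a minimal sketch of how such pairwise group comparisons can be annotated, assuming a two-sided Mann–Whitney U test and made-up click counts (the test choice, the data, and the function names are illustrative assumptions, not the study’s analysis):

```python
from scipy.stats import mannwhitneyu

def stars(p: float) -> str:
    """Map a p-value to the asterisk convention used in Table 3."""
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""

# Illustrative (made-up) click counts for two participant groups.
hi_clicks = [12, 15, 9, 20, 14, 11]
nex_clicks = [7, 8, 10, 6, 9, 8]

stat, p = mannwhitneyu(hi_clicks, nex_clicks, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f} {stars(p)}")
```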
Table 4. Summary of significant differences between participant groups concerning the effects of hearing loss, music experience, and language proficiency.
Effects | HI–NEX | HI–EXP | NEX–EXP
Hearing loss | | |
Music experiences | | |
Speech and language | | |