Article

Personalizing the Fitting of Hearing Aids by Learning Contextual Preferences From Internet of Things Data

1 Department of Applied Mathematics and Computer Science, Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
2 Eriksholm Research Centre, DK-3070 Snekkersten, Denmark
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Computers 2018, 7(1), 1; https://doi.org/10.3390/computers7010001
Submission received: 1 November 2017 / Revised: 9 December 2017 / Accepted: 20 December 2017 / Published: 23 December 2017
(This article belongs to the Special Issue Quantified Self and Personal Informatics)

Abstract: The lack of individualized fitting of hearing aids means that many patients never receive the intended benefits, in turn causing the devices to be left unused in a drawer. Yet living with an untreated hearing loss has been found to be one of the leading lifestyle-related causes of dementia and cognitive decline. Taking a radically different approach to personalizing the fitting process of hearing aids, by learning contextual preferences from user-generated data, we outline in this paper the results obtained through a 9-month pilot study. Empowering the user to select between several settings using Internet of things (IoT) connected hearing aids allows for modeling individual preferences and thereby identifying distinct coping strategies. These behavioral patterns indicate that users prefer to switch between highly contrasting aspects of omnidirectionality and noise reduction depending on the context, rather than relying on the medium “one size fits all” program frequently provided by default in hearing health care. We argue that an IoT approach facilitated by the usage of smartphones may constitute a paradigm shift, enabling continuous personalization of settings depending on the changing context. Furthermore, making the user an active part of the fitting solution based on self-tracking may increase engagement and awareness and thus improve the quality of life of hearing-impaired users.

1. Introduction

1.1. The Growing Societal and Personal Costs of Hearing Loss

There are enormous societal costs related to hearing loss, estimated to top £25 billion a year in the United Kingdom alone, including reduced productivity and decreased economic output [1]. However, the personal costs are even more severe: hearing loss is considered one of the biggest risk factors for dementia. Livingston et al. estimate that a third of the lifestyle-related causes of dementia can be explained by untreated hearing loss in midlife, partially due to a decline in cognitive functions. Meanwhile, multiple studies have shown that “hearing aids can prevent or delay the onset of dementia” [2] and may attenuate cognitive decline [3], by both reducing cognitive load and improving working memory [4,5,6]. Despite the availability of devices, often fully covered by health insurance or through public health care, fewer than 5% of people suffering from a hearing loss address it by using a hearing aid [7]. Even after acknowledging the need, hearing-impaired persons take on average a decade before acquiring the devices [7]. Furthermore, fewer than 25% of those who own hearing aids use them [8]. In a scoping study by McCormack and Fortnum, the top reasons for not using a hearing aid were that the devices did not provide sufficient benefit in noisy situations and that the perceived sound quality was poor [9].
One may ask why people do not choose to use hearing aids, given the evidence of a high risk of incident dementia and the knowledge that these devices could potentially alleviate cognitive decline. Studies analyzing outcome measures capturing user satisfaction indicate that satisfaction is largely determined by two factors: (1) whether the user perceives an improved quality of life through use of the devices, and (2) to what degree the devices help overcome limitations when interacting with others. The degree to which the user feels involved in the traditional clinical fitting process highly impacts the overall satisfaction [10]. Alternative models for selling hearing aids over the counter, based on do-it-yourself audiometry tests, may technically provide the same fitting as a clinical setting [11]. However, the lack of dialogue and hearing care counseling has been shown to result in lower satisfaction. Actively involving the user in shaping the listening experience when adapting to the devices appears to be crucial.

1.2. The Lack of Personalization in Hearing Health Care

Currently, hearing aids are by default fitted solely on the basis of a pure-tone audiogram. The audiogram defines the thresholds at which a sine-wave tone can be perceived, in order to determine which frequencies should be amplified to compensate for the hearing loss. A mild hearing loss may involve a 20–40 dB decline across frequency bands, typically spanning from the mid range (2–4 kHz) to the high range (5–10 kHz). However, this test measures only the sensitivity to an artificially produced tone, rather than the sounds that characterize a normal listening experience. Killion points out that individuals with similar audiograms may have up to a 15 dB difference in their ability to understand speech in noisy environments [12]. Wendt et al. have further shown that individuals benefit from noise-reduction algorithms [13]. Likewise, Marozeau and Le Goff show that the concept of loudness is highly individual, which in turn may determine whether soft sounds should be amplified to provide added intensity or are merely perceived as unwanted moderately loud noise [14,15]. This highlights the challenge, even in clinical settings, of optimizing the hearing experience. Today’s solutions use discrete steps, varying the thresholds with regard to noise reduction and attenuation [16]. In order to simulate real-life listening scenarios, clinicians are often limited to playing back a few audio clips, capturing situations such as attending to several talkers in a crowded cafe or a conversation in a car masked by background noise. More advanced solutions for simulating true listening scenarios, such as Oticon Sound Studio, enable the hearing care professional (HCP) to compose auditory scenes consisting of all sorts of environmental sounds, such as a drill hammer, a bird chirping or a crying child. In a lab setting, such simulations can optimize the fitting process, as found by Dahl and Hanssen [17,18], as they make it easier to determine true user needs in simulated listening scenarios, potentially decreasing the number of follow-up visits to the clinic for refitting. However, a major challenge in hearing health care worldwide is the lack of audiological resources. Few, if any, HCPs have the option of extending the fitting procedure further to personalize settings, as the allocated time is highly constrained. Hence, fundamentally different solutions are in high demand.
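To make the threshold-based prescription step concrete, the following minimal sketch illustrates the general idea with the classic “half-gain rule”, a textbook audiological heuristic in which the insertion gain is roughly half the measured hearing loss. This is for illustration only and is not the proprietary fitting rationale implemented in clinical fitting software; the audiogram values are invented.

```python
# Illustration only: the classic "half-gain rule" heuristic, not the
# proprietary rationale used by clinical fitting software.
# Hearing thresholds (dB HL) per frequency band are invented example values.
audiogram = {250: 20, 500: 25, 1000: 30, 2000: 45, 4000: 55, 8000: 60}

def prescribe_gain(thresholds_db_hl):
    """Map each hearing threshold (dB HL) to an insertion gain of roughly
    half the loss, a first-order textbook approximation."""
    return {freq: round(0.5 * loss, 1) for freq, loss in thresholds_db_hl.items()}

print(prescribe_gain(audiogram))
# {250: 10.0, 500: 12.5, 1000: 15.0, 2000: 22.5, 4000: 27.5, 8000: 30.0}
```

The sketch makes plain why two users with identical audiograms receive identical prescriptions, even though, as noted above, their real-world listening abilities may differ substantially.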

1.3. Learning Preferences From User Behavior

In a previous study, Laplante-Lévesque et al. [19] investigated the usage of hearing aids and found two distinct types of behavior: users wearing the device from waking up until going to bed, in contrast to those using the hearing aids only when needed, possibly driven by external demands and context. However, all users have unique behavioral patterns, and aggregated data averaged over longer periods does not convey the fine structures of hearing aid usage. Without somehow establishing a dialogue between HCPs and users, it has up to now not been possible to identify and learn preferences from these fine structures.
Instead, aiming to infer preferences by connecting directly to users through their smartphones, Aldaz et al. investigated the feasibility of using machine learning to predict optimal hearing aid settings on the basis of the signal-to-noise ratio (SNR) and attenuation. They found that half of the test subjects preferred the personalized settings [20]. Other attempts at using machine learning to optimize hearing aids have shown similar findings [21,22].

1.4. Making User-Generated Data an Essential Part of Hearing Health Care

Interest in quantified self (QS) and personal informatics (PI) has increased over the past decade. With the prevalent usage of smartphones and wearables, personal, quantifiable and accurate data on everyday phenomena has become broadly available. Such data has been applied for health tracking within QS and covers a vast range of phenomena, including menstrual tracking [23], mental health in students [24,25], post-traumatic stress disorder (PTSD) [26], sleep patterns [27] and diabetes management [28], to mention only a few. These examples illustrate that such data can lead to new personal discoveries, insights and improved health in terms of quality of life.
The Oticon Opn is the first hearing aid that is connected to the Internet and is able to interact with other Internet of things (IoT) devices, such as cars, smart light bulbs and music streaming services, or to learn from cloud-based artificial intelligence (AI) services through the “if this, then that” (IFTTT) web service [29]. Essentially, hearing aids, as U.S. Food and Drug Administration (FDA) approved medical hearables, can be considered state-of-the-art wearables capable of providing augmented hearing. From a technical point of view, a hearing aid is a miniature IoT-connected smart speaker, equipped with an omnidirectional microphone array. Combined with embedded advanced signal processing or neural networks, hearing aids may continuously adapt to learned user preferences or to the features characterizing the changing soundscapes. Coupling the hearing aids with other sensor data, such as heart rate, motion and location, will add further insights into the context of soundscapes experienced throughout a day. Because of the unobtrusive placement behind the ear, this type of wearable can be worn during the majority of the waking hours. Investigating how the user adjusts the volume or changes program settings can provide additional information about individual sensitivity to noise, motivation to interact and the changing cognitive state. Not only the external context but also the user’s state, cognitive capabilities or sense of fatigue may affect how preferences are altered in order to cope with the changing listening scenarios during the day. This changing context may be stable over time, forming patterns repeated at specific hours of the day, on weekdays versus weekends, and varying over weeks, months or years. Thus, applying tracking methods from QS and PI can lead to insights into user preferences inferred from behavioral patterns and soundscape data.
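As a minimal sketch of how such IoT event logging can work, the snippet below forwards a hypothetical user interaction to IFTTT’s Maker Webhooks endpoint, from which an applet can append the event to a cloud spreadsheet. The event name, key and payload layout are illustrative assumptions; the paper does not document the internals of the actual Oticon ON integration.

```python
# Minimal sketch: forwarding a user-initiated event to an IFTTT applet via
# the Maker Webhooks channel. The event name, key and payload fields are
# assumptions; the actual Oticon ON integration is not documented here.
import datetime
import requests

IFTTT_KEY = "YOUR-WEBHOOKS-KEY"          # hypothetical placeholder
EVENT = "hearing_aid_interaction"        # hypothetical event name

def log_interaction(kind: str, value: str) -> None:
    """POST one interaction (e.g. kind='program', value='P4') so an
    IFTTT applet can append it as a time-series row in the cloud."""
    payload = {
        "value1": datetime.datetime.now().isoformat(),  # timestamp
        "value2": kind,                                  # 'program' or 'volume'
        "value3": value,                                 # e.g. 'P4' or '-2'
    }
    requests.post(
        f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}",
        json=payload, timeout=10,
    )

log_interaction("program", "P4")
```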
This paper explores how to infer user preferences solely on the basis of user-initiated program and volume changes throughout a 9-month pilot study, without taking the corresponding soundscape data into account. These adjustments are converted into time-series data saved in the cloud, using IFTTT to transfer the data. Previous studies have primarily used summarized historical data retrieved from the hearing aid software, whereas IoT devices may potentially learn from usage data, such as volume and program interactions, to dynamically adapt the hearing aids to behavioral patterns. In this study, we look at the long-term behaviors and patterns displayed by five test subjects over at least 9 months. The study investigates daily, weekly and monthly interaction patterns, in order to highlight differences between weekdays and weekends, changes in behavioral patterns when device settings are modified, and more general usage patterns, when aiming to personalize augmented hearing by learning from user-generated data. The hypotheses to investigate include the following: Do users wish to actively select alternative programs to individualize their listening experience? Do these preferences constitute unique behavioral patterns? Is it possible to identify specific coping strategies displayed in program and volume interactions over time?

2. Materials and Methods

2.1. Participants

N = 6 participants (all men) volunteered for the study, recruited from a screened population provided by Eriksholm Research Centre. Ages ranged from 49 to 76 years (mean age of 62.8 years). All participants had more than a year of experience using hearing aids. The participants suffered from a symmetrical hearing loss, ranging from mild–moderate to moderate–severe as described by the World Health Organization (WHO) [30]. All had an iPhone 4S or a newer model. Subject 6 was excluded because of missing data. The test subjects received financial compensation for transportation only. All test subjects signed an informed consent form before the beginning of the experiment. An overview of the subjects can be seen in Table 1.
Subject 1
worked in construction. This subject had a dynamic work environment including noisy construction sites, quiet meeting rooms and driving in between.
Subject 2
worked in the transportation sector as a bus driver. This subject was exposed to a constant noise level while at work. The subject retired halfway through the experiment but returned to work, part-time only, in the last month of the experiment.
Subject 3
worked in an office environment. This subject attended many meetings, including teleconferences. The subject reported that the acoustics in the canteen at work were poor. This subject traveled internationally frequently, spending considerable time on flights.
Subject 4
was retired. The subject spent several days a week playing cards, with a high noise level and several competing talkers. The subject lived an active life, including activities such as sailing, and was exposed to various sound environments.
Subject 5
worked in an office environment. The subject had many meetings in or out of the office, experiencing multiple auditory environments during weekdays.
Subject 6
worked in the naval industry, restoring boats and supervising team-building events on sailboats. This subject was exposed to heavy noise from power tools, as well as engine and wind noise. The subject tended to wear the hearing aids when the noise was acceptable and they were not displaced by hearing protection.

2.2. Apparatus

Each subject was equipped with two Oticon Opn hearing aids with stereo Bluetooth low energy (BLE) at 2.4 GHz (Oticon A/S, Smørum, Denmark). All subjects used a personal iPhone 4S or a newer iPhone model with Bluetooth 4.0. The logged data consisted of any user-initiated program or volume change made through the Oticon ON iPhone app, formatted as time-series data, transferred using IFTTT, stored in the cloud and shared via Google Drive. The hearing aids were fitted with four programs. The subjects were provided with a test user Google account prior to the experiment. This account was used for data collection; the subjects had full ownership of the account and data and could terminate access, and thus the experiment, at any given time.
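A minimal sketch of loading such a log for analysis is given below. The file name and column names (‘timestamp’, ‘kind’, ‘value’) are assumptions; the paper only states that the events were stored as time-series data in the cloud via IFTTT.

```python
# Sketch of loading the logged interactions for one subject. The file name
# and column names are assumptions; the paper only states that events were
# stored as time-series data in the cloud and shared via Google Drive.
import pandas as pd

events = pd.read_csv("subject_01_events.csv", parse_dates=["timestamp"])
events = events.sort_values("timestamp").reset_index(drop=True)

# Split into the two interaction types analyzed in the paper.
program_changes = events[events["kind"] == "program"]
volume_changes = events[events["kind"] == "volume"]
print(len(program_changes), len(volume_changes))
```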

2.3. Procedure

The subjects were fitted with two Opn hearing aids by an audiologist. The hearing aids were fitted on the basis of a unique frequency-dependent volume amplification derived from a pure-tone audiogram for each subject. Each subject was fitted with four programs, through the Oticon Genie 2.0 release 17.1 Opn fitting software (Oticon A/S, Smørum, Denmark) on a PC with Windows 7, via a Sonic Innovations EXPRESSlink 3 (Sonic AG, Bern, Switzerland). The programs were changed after 3–4 months of use, halfway through the experiment.
Whereas hearing aids traditionally apply a beam-forming algorithm to narrow the auditory focus in noisy environments, the Opn devices instead omnidirectionally preserve all signals resembling voices while filtering out ambient noise. In the present experimental design, all four programs preserve any sounds with voice-like modulation characteristics, but with varying degrees of attenuation of directional and diffuse background noise [16]. Rather than providing a default medium setting offering a compromise in terms of directionality and noise reduction, the four programs represent contrasting aspects of omnidirectionality, brightness and noise reduction. Assessing which programs are preferred makes it possible to determine how users apply aspects of omnidirectionality or noise removal to spatially differentiate auditory streams, which is essential in order to cognitively separate and selectively attend to competing voices or interfering sounds [31]. Three dimensions were altered in this experiment: brightness and noise reduction, coupled with attenuation.
Brightness perception of sound is directly related to volume gain, primarily in the high frequencies. Increasing brightness may contribute to the interaural level difference (ILD), which may give up to a 20 dB difference in sound perception. Even without directly affecting the speech frequency spectrum, added brightness helps with separating streams by improving sound localization in the 10 kHz range related to the shape of the pinna. The experimental setup thus highlights whether the program usage provides sufficient spatial cues for separating the auditory sources in a given context. That is, the program usage reflects whether the users rely on binaural differences in loudness and head shadow to attenuate ambient noise and enhance the amplification of high frequencies, which improves sound localization [32,33], or actively reduce directional and diffuse noise [13] in order to cope with the changing auditory environments. An increased brightness results in further amplification of mid-frequency sounds, typically consonants, which improves speech intelligibility. However, added brightness may in some situations be perceived as too harsh, as other sounds with similarly high-frequency characteristics will likewise be amplified and seem shrill.
The noise reduction includes both the attenuation of interfering directional sounds not resembling voices, for example a dog barking or a passing car, and the removal of diffuse noise, such as background noise from an air-conditioning system. A low attenuation of directional sources without noise reduction preserves ambient sounds, resembling the natural dampening provided by the shape of the head and the ears, whereas a high attenuation of interfering sounds with non-voice characteristics, coupled with noise reduction, artificially creates a better SNR.
In the experimental setup, P1 was always the default startup setting, against which the alternative programs listed below, and illustrated in Figure 1, were compared:
P1 
Resembling an omnidirectional perception with a frontal focus. Sounds from the sides and behind the listener are slightly suppressed to resemble the dampening effect due to the shape of the head and the pinna.
P2 
Similar to P1 but gently attenuating directional noise and removing diffuse noise when encountering complex listening environments.
P3 
Similar to P1 but increasingly attenuates directional noise even in simple listening environments. Has less amplification in mid and high frequencies, producing a “rounder” or “softer” sound. Provides the highest amount of diffuse noise reduction.
P4 
Similar to P3 with even lower thresholds for attenuation of directional noise and diffuse noise removal in all listening environments.
P5 
Identical to P3 with regard to high attenuation and high noise reduction. Has added amplification in mid and high frequencies to provide a brighter sound.
P6 
Similar to P4 with high attenuation. However, no noise reduction is applied. Has an increased amplification in mid to high frequencies, producing a brighter sound.
P1 constituted the default program throughout the experiment. The choice of using P1 as a baseline was based on the acoustical characteristics of this program, which mimic the natural dampening of sounds due to the shape of the ears and the binaural shadowing effect of the head. The result is an omnidirectional focus with only a slight attenuation of sounds coming from behind and from the sides. Using P1 as a default program thus highlights when users actively select any other program, offering additional attenuation of noise or increased brightness, improving the spatial separation of sounds.
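As a compact reference, the program space described above can be summarized qualitatively as follows; the ordinal labels merely paraphrase the program descriptions and Figure 1 and are not calibrated parameter values.

```python
# Qualitative summary of the six programs described above. The labels
# paraphrase the text and Figure 1; they are not calibrated dB values.
PROGRAMS = {
    #      (attenuation,                  noise reduction,           brightness)
    "P1": ("natural dampening only",      "none",                    "neutral"),
    "P2": ("gentle",                      "in complex environments", "neutral"),
    "P3": ("even in simple environments", "highest diffuse",         "softer"),
    "P4": ("high, all environments",      "high",                    "softer"),
    "P5": ("high (as P3)",                "high (as P3)",            "brighter"),
    "P6": ("high (as P4)",                "none",                    "brighter"),
}
```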
The experiment consisted of two periods. The first period ran from September 2016 to January 2017, and the second period, from February to June 2017. An intervention occurred in the middle of the experiment, to further investigate whether a change in programs also generated a corresponding change in user behavior. Programs 1 and 4 were available in both periods of the experiment. For the first half of the experiment, programs 1, 2, 3 and 4 were used. After the intervention, programs 1, 4, 5 and 6 were used, as illustrated in Figure 2.
At the first visit, the participants were instructed to “use the program that fits the situation the best” and “to use the hearing aids as you would normally, but primarily controlling it from the iPhone app”, in order to encourage exploration of the programs and natural behavior. The test subjects were not informed about what the four programs represented. The test subjects were further encouraged to adjust the volume gain if needed.
The volume control does not reflect decibel values. It ranges from −8 to 4 and gives visual feedback to the user when interacting with the iPhone app.

3. Results

Even on the basis of the limited data collected in this pilot study, analyzing only the aspects of time and user interaction while not considering cognitive capabilities, the individual differences between users are evident. These differences lead to different coping strategies, which highlights the need for personalizing settings individually. However, the clinical resources in hearing health care are already overburdened, meaning that any further individualization would require that such preferences be learned automatically from user-generated data.
The behavioral patterns inferred from data in this pilot study indicate that users prefer to switch between highly contrasting aspects of omnidirectionality and noise reduction depending on the context. This is very different from the prescribed medium “one-size-fits-all” program frequently provided by default in hearing health care. The key takeaway is that a single prescribed audiological setting did not fulfill the needs of the test subjects in this study. Rather than selecting one program offering a balance between omnidirectionality and noise reduction, the test subjects typically changed between programs that appeared highly contrasting in terms of attenuation or brightness.

3.1. Behavioral Patterns Inferred from User-Initiated Program and Volume Changes

The observed program and volume changes alter the perceived soundscape along three dimensions: attenuation, noise reduction and brightness, as described in the Methods section. Overall, the subjects of the experiment described in this paper primarily altered the settings along these three dimensions. For an overview of the programs, see Figure 1.
The selected programs thus reflect when a user prefers to increase the brightness to enhance the spatial separation of sounds, which improves the ability to selectively attend to any sound, or to remove diffuse noise and sounds that do not resemble voices, in order to increase the SNR and thereby improve speech intelligibility. Even with only five test subjects, we see different behavioral patterns in relation to usage time; see Table 2. This indicates that some users comply by wearing their hearing devices from when they wake up until bedtime, while others may selectively decide to wear their hearing devices only when they perceive a benefit, depending on the context.
A more detailed percentage-wise split of the program distribution for each program is illustrated in Figure 3. The color for P1 is yellow, for P2 is dark yellow, for P3 is brown, for P4 is orange, for P5 is red and for P6 is maroon. The same color scheme is used throughout the paper.
This figure shows that three of the subjects preferred the default omnidirectional focus with added brightness more than 70% of the time. They alternated to programs providing more attenuation and directionality, such as P3–P6, when needed. Subjects 3 and 4 actively chose one or more programs with more attenuation and directionality (P4–P6), whereas subjects 1 and 5 used brighter-sounding programs (P2 and P5) to cope with a changing context. We found that program P1 was preferred on average 66% of the time. This differs markedly from previous findings of 33% [20] and 37% [34], respectively. This could be because P1 was the default program or, more likely, because it fulfilled the needs in most contexts by providing an omnidirectional frontal focus mimicking the natural dampening of sounds from behind and from the sides, caused by the shape of the ears and the shadowing effect of the head.
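A sketch of how such a distribution can be derived from the logged events is shown below: the share of time per program is approximated by the gaps between consecutive program-change events. The data is synthetic, and a real analysis would additionally have to handle on/off events.

```python
# Sketch: time-weighted share of use per program, approximated by the gap
# between consecutive program-change events. Synthetic toy data; the real
# analysis would also have to account for on/off events.
import pandas as pd

pc = pd.DataFrame({
    "timestamp": pd.to_datetime(
        ["2016-09-05 08:00", "2016-09-05 09:30",
         "2016-09-05 12:00", "2016-09-05 18:00"]),
    "program": ["P1", "P4", "P1", "P2"],
})
pc["duration"] = pc["timestamp"].shift(-1) - pc["timestamp"]
pc = pc.dropna(subset=["duration"])

share = (pc.groupby("program")["duration"].sum()
         / pc["duration"].sum() * 100).round(1)
print(share)  # P1 75.0, P4 25.0: percent of logged time in this toy log
```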
The usage patterns indicate that one program may rarely be adequate, as most users have a need for more than one program to cope with the changing context. Even in a small test population, it becomes evident that the majority actively selects contrasting settings depending on the context. The next sections display these individual preferences in more detail.

3.2. Unique Patterns Characterized by Program Changes

The user preferences are characterized by attenuation, noise reduction and brightness perception. Various coping strategies are observed in the program interactions. The following figures contain the average daily usage per hour, from 06:00 to 24:00; the average daily usage per hour on weekends, from 06:00 to 24:00; and an overview of the full experimental period, for one or more subjects.
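The hourly profiles behind these figures can be sketched as follows, assuming a per-minute on/off usage indicator has been reconstructed upstream from the logged events; the reconstruction itself is omitted, and the series below is a zero-filled placeholder.

```python
# Sketch of the hourly usage profiles: average minutes of use per clock
# hour, split into weekdays and weekends. 'usage' stands in for a
# per-minute on/off indicator reconstructed from the logged events.
import pandas as pd

idx = pd.date_range("2016-09-01", "2017-06-30", freq="min")
usage = pd.Series(0, index=idx)                 # placeholder: 1 = device in use

frame = usage.to_frame("on")
frame["hour"] = frame.index.hour
frame["weekend"] = frame.index.dayofweek >= 5   # Saturday = 5, Sunday = 6

# Mean of the per-minute indicator within an hour, times 60,
# gives the average minutes of use per hour.
profile = (frame.groupby(["weekend", "hour"])["on"].mean() * 60).unstack(0)
print(profile.loc[6:23])                        # 06:00-24:00, as in the figures
```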
The first observed coping strategy is based on alternating the brightness perception. By increasing the gain of mid to high frequencies, the perceptual brightness is increased. Subjects 1 and 5 both actively chose a brighter-sounding program, either P2 (dark yellow) or P5 (red), to compensate for their hearing loss. They wished to increase speech intelligibility by perceptually adding more detail, both to speech and to source localization. Both of these subjects used P2 and P5 20% of the total time, as observed in Figure 3. Figure 4a,b illustrates the average program usage in minutes per hour between 06:00 and 24:00 for subjects 1 and 5. For both subjects, it seems that the brighter programs (P2 and P5) were used to complement P1 more often in the morning than in the rest of the day. Subject 1 furthermore used the directional program P4 in the evenings to complement P1. Interestingly, the need for added brightness depended on the day and time. This can be observed in Figure 4e,f, where a full overview of the programs over the test period is illustrated. The vertical axis represents weeks, the horizontal axis represents the time of weekdays, and the dashed line marks the intervention when the programs were adjusted during the experiment. From these illustrations, it becomes visible that both subjects 1 and 5 actively chose the P2 and P5 programs on weekdays, while the selection of these programs, as well as the overall usage of the hearing aids, was reduced during the weekend. Both test subjects reported that programs P2 and P5 sounded either more “harsh, bright, or crisp”, depending on the context, but enhanced speech intelligibility and the overall intensity of the auditory environment.
Test subject 1 described the usage of the brighter sounding programs as follows:
“When I attend meetings, which I do a lot, I like to shift my attention between the participants in order to hear everyone in the room. Thus combining omnidirectionality with a more bright timbre. It may not sound as nice, or pleasant compared to my default preferences. However, it helps me understand what is being said. When the meeting ends, I usually change to another program.”
After the intervention, during which the programs were changed, both subjects 1 and 5 continued to actively choose a brighter-sounding program. The intervention added attenuation and noise reduction to a brighter-sounding program (P5), while retaining the increased high-frequency gain. Despite this, the subjects preferred the brighter sound, indicating that brightness was what supported these subjects.

3.3. Alternating Between Omnidirectional and Frontal Focus

An alternative coping strategy is characterized by changing between an omnidirectional natural sound without noise removal and a frontally focused sound with increased noise reduction. This strategy was evident for subjects 2 and 3, as illustrated in Figure 5. Looking at the average usage per hour for subject 2 (Figure 5a) and subject 3 (Figure 5b), it can be seen that P1 was preferred, and the frontal directional program P4 was used to complement P1 when the context changed. For subject 2, this was most evident between 7:00 AM and 8:00 AM, whereas subject 3 seemed to use the program increasingly from 8:00 AM, with a peak at midday, after which its usage decreased during the day while the usage of P1 increased.
Test subject 2 used a coping strategy with the directional program P2 between 7:00 AM and 4:00 PM in the first half of the experiment, before the intervention when the programs were adjusted. Coincidentally, at the same time, subject 2 retired from his job, which is reflected in the change of preferences defined by the frontal directional focus (dark red) on weekdays in the first half of the experiment. Subsequently, this behavioral pattern reappeared when he began working again part-time, resulting in sporadic usage of the same program towards the end of the experiment. Subject 2 described the behavioral pattern as follows: “When I drive I do not like the road noise, and noise in the bus. I prefer a program that attenuates these noises.” A similar pattern appeared for subject 3 after the intervention, towards the end of the experiment, when the frontal focus with noise reduction was preferred on Mondays and Tuesdays. This augmentation of sound also appeared on some Fridays and Saturdays, suggesting a need for increased speech intelligibility. Subject 3 reported that the directional program “helps in noisy environment, such as a restaurant or a bar”.

3.4. Active and Habitual Users

Laplante-Lévesque et al. describe two types of users: those who wear the hearing aids from waking up until bedtime, and those who use them on a more casual basis, driven by external demands [19]. Subject 1 had a unique behavioral pattern characterized by many interactions, comprising both program changes and on/off events. This test subject worked in different environments throughout the day, which was reflected in preferences for changing between brightness, attenuation, noise reduction or even “silence”, depending on the changing context. The subject reported that “I wear the hearing aids when I have a need. For example, when I’m in a quiet office, I prefer not to wear them”. This pattern can be observed in Figure 4e and supports the findings of Laplante-Lévesque et al. The user had a relatively low hourly usage of 13.74 min, as shown in Table 3. However, the detailed and fragmented illustration gives a level of detail not previously seen. Interestingly, both subjects 3 and 5 had a usage time per hour of less than 20 min. These subjects did, however, seem to switch off the hearing aids for periods; in contrast, once the devices were turned on, they were used for hours without any off events.
Test subject 2 had a visibly different coping strategy, remaining in the default omnidirectional program for extended periods and changing to a frontal noise-reducing program when needed. It is interesting to see the adjustments relating to the dynamically changing context of work scenarios. Furthermore, subject 2 had the highest average usage time, with 27.7 min of use per hour. This amount of detail demonstrates the need for assessing when and why a hearing aid is used as it is. The authors are not aware of similar findings in the literature, other than anecdotal reports from hearing care clinicians. This subject would be classified as a “habitual user”, without concern for the fine structures of program changes motivated by a changing context; such information is lost when averaging and aggregating data.

3.5. Alternating and Unique Patterns

The previous sections highlight the similarities and differences in various coping strategies. However, for several subjects, the coping strategy changed over time, for some even radically. Subject 4 displayed a marked departure from the original behavioral pattern, as illustrated in Figure 6. Initially, subject 4 used both brightness and attenuation of noise to improve speech intelligibility in challenging listening situations. This subject primarily remained in the omnidirectional program for the first part of the experiment. After the intervention, this subject actively changed to using the frontally focused program as the default program. Only a few program changes occurred, to a similar program without noise reduction and to the default omnidirectional program. This suggests a need for continuous personalization, as user preferences may change over time. Furthermore, it indicates how a change in lifestyle, or context, may radically alter the needs of the user. Such changes in user needs are rarely addressed today because of the limited resources in hearing health care.

3.6. Weekdays versus Weekends

For all subjects, a markedly different behavioral pattern can be observed between weekdays (Monday through Friday) and weekends (Saturday and Sunday). The average minutes of usage per hour on weekends is illustrated in Figure 4c,d, Figure 5c,d and Figure 6b. The usage of the hearing aids was overall lower during weekends. All test subjects confirmed that the lower usage on weekends was due to a less demanding context. Several highlighted that “weekends are usually less challenging, both in regard to context and to mental work load”. This indicates that the environmental context on weekends provides, in general, fewer challenges than on weekdays. Furthermore, as a result of changes in activities, the need for increased support is lower on weekends. Subjects 1, 3 and 5 all mentioned that they did not benefit as much from the hearing aids on weekends because of less demanding activities, the exception being when they attended a social event with competing talkers or noisy environments with poor acoustics. This behavioral pattern was consistent over several months, indicating a reduced need for the hearing devices during weekends. If the listening scenarios were perceived as less challenging during weekends, the resulting usage patterns could be interpreted as a baseline characterizing the minimum needs of the user. In contrast, weekdays were likely to represent more dynamic and challenging sound environments, causing the users to actively change programs dependent on the changing context.
In current hearing health care, these unique behavioral patterns cannot be addressed because of limited clinical resources. From the previous findings, we see that the majority of the five test subjects actively used more than one program. They did this to widen the dynamic range of the experienced sound environment. At least two contrasting programs, such as P1 and P4, were needed to cover the needs of these test subjects.

3.7. Unique Behavioral Patterns over Weeks and within Weeks

Subject 3 increased the volume of the omnidirectional program in the last third of the experiment, which may indicate an adaptation to the volume gain. Both subjects 3 and 5 actively adjusted the volume gain in the omnidirectional program to increase speech intelligibility, as shown in Figure 7c. Subject 5 chose a different strategy on weekdays. This can be observed in Figure 4f, where additional selection of brightness, marked in two shades of orange, appears on Tuesdays. However, the volume was increased more in the omnidirectional program. Lastly, subject 3 tended to use the frontally focused program on Mondays and Tuesdays, while actively increasing the volume. This was in contrast to weekends, on which the default program was used with only a few volume adjustments, as shown in Figure 5f and Figure 7c. This indicates two coping strategies: either choosing a program with more directional focus, or combining omnidirectional characteristics with a volume increase.
These behavioral patterns may indicate that some user actions are driven by recurring events, while others change dynamically over time.

3.8. Unique Patterns Characterized by Volume Change

Another interactive parameter is the volume gain. Essentially, a non-linear amplification of soft sounds is applied across all frequency bands, rooted in a fitting rationale based on the user’s audiogram [15]. Adjusting the volume gain additionally provides the user with the opportunity to either zoom in or out, while keeping the desired noise attenuation or brightness preferences associated with the selected program parameters. Figure 7 displays the individual differences in volume interactions for the five test subjects. This figure illustrates the individual preferences for actively using the volume to complement or tune the current program. Subject 1 (Figure 7a) and subject 2 (Figure 7b) made limited use of the volume, indicating that the brightness and attenuation were sufficient. Both of these subjects primarily used P1, while subject 1 used brighter programs around 20% of the time. In contrast, subject 3 (Figure 7c), subject 4 (Figure 7d) and subject 5 (Figure 7e) actively used the gain to adjust the current program. Subject 3 primarily used P1 and P4 and began increasing the volume after the intervention. Subject 4 primarily relied on P1 and P4. This subject actively used the volume in either program.

3.9. Number of Program and Volume Interactions

The split between program and volume interactions indicates whether a user prefers controlling the attenuation, noise reduction and brightness, or the overall gain of the device, where the volume ranges from −8 to 4. It should be noted that the devices reset the volume to 0 after a program change. The volume interactions can thus be interpreted as an indication of moving away from the default settings. If program changes account for the majority of a user’s interactions, there are few deviations from the default volume, and vice versa.
In Figure 8, the percentage split between the number of program and volume changes is illustrated. This does not indicate the number of volume steps but rather a discrete count of volume changes.
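The split itself is straightforward to compute from the event log, as in the sketch below with synthetic data and an assumed ‘kind’ column; note that the automatic volume reset after a program change must not be counted as a user-initiated volume interaction.

```python
# Sketch of the interaction split in Figure 8: discrete counts of program
# vs. volume changes. We assume the log contains only user-initiated
# events, i.e. the automatic reset of the volume to 0 after each program
# change is not logged as a volume interaction.
import pandas as pd

events = pd.DataFrame({"kind": ["program", "volume", "volume",
                                "program", "volume"]})
counts = events["kind"].value_counts()
print((counts / counts.sum() * 100).round(1))
# volume 60.0, program 40.0: the percentage split, as in Figure 8
```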
Three out of five subjects had a balanced split between program and volume interactions. This indicates that such adjustments are needed in order to augment the sound and thereby achieve the desired outcomes. The remaining two subjects preferred program changes over volume interactions; this was most evident for subject 2, who had markedly fewer volume interactions.
Looking only at the aggregated and split number of interactions, it is evident that each user interacted with their hearing aids in unique ways. Some users perceptually benefited from changing the attenuation, noise reduction and brightness, while others utilized the volume to further customize the default programs provided.

3.10. Volume Interactions With Respect to Programs

Volume interactions with respect to programs indicate how the hearing aids are used. Figure 9 illustrates volume changes over time within a given program, before a change to another program. It is observed that volume interactions varied considerably across the test subjects. Subject 4 seemed to primarily decrease the volume, and subject 3 seemed to primarily increase it. These nuances would disappear if the volume were simply averaged over a longer period.
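Such within-program trajectories can be extracted by segmenting the event log at each program change, as in the minimal sketch below with synthetic data and assumed column names; each program change starts a new “dwell” at volume 0.

```python
# Sketch for Figure 9: volume trajectories within one program 'dwell',
# i.e. between consecutive program changes. Each program change starts a
# new dwell at volume 0; later volume events trace the within-dwell
# adaptation. Synthetic data; 'kind' and 'value' are assumed column names.
import pandas as pd

ev = pd.DataFrame({
    "kind":  ["program", "volume", "volume", "program", "volume"],
    "value": ["P1",      "-2",     "-1",     "P4",      "1"],
})
ev["dwell"] = (ev["kind"] == "program").cumsum()  # segment id per dwell

for dwell_id, seg in ev.groupby("dwell"):
    program = seg.iloc[0]["value"]
    vols = seg.loc[seg["kind"] == "volume", "value"].astype(int).tolist()
    if vols:
        print(program, vols)  # e.g. P1 [-2, -1]: lowered, then adapted upward
```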
Interestingly, it is observed that all subjects initially lowered the volume of the default omnidirectional program (light yellow). However, if the subjects remained in the program, the volume was subsequently increased. For all subjects, the omnidirectional focus of program P1, which amplifies any sounds within a 360° radius, may be perceived as louder. These illustrations show how users adapt to the increased gain, or intensity, within minutes. As one subject phrased it: “P4 sounds round and nice. However, when you speak I’m not sure how much I benefit from this program. On the other hand, if I use P2 or another bright program, I understand more, but I need some time to adjust to the sound. Actually, I like the sound of P2”. This illustrates that the programs with added attenuation, P4–P6, sound pleasantly round; however, the lack of added high-frequency gain limits the ability to separate sources and lowers the contrast in consonant utterances. Today’s hearing aids modify only the overall gain, without taking such short-term adaptation of the perceived loudness into consideration.

4. Discussion

4.1. The Opportunity for Personalizing Hearing Health Care as Hearing Aids Become Internet of Things Devices

There is an urgent need to rethink how users can be empowered to become an active part of an individualized fitting process; the WHO has warned that more than 1 billion young adults are at risk of hearing loss from listening to music at too high a level [35] and predicts that hearing loss will be the seventh-highest cause of chronic disease in 2030 [36]. Hearing loss is one of the most common sensory deficits and is more common than vision impairment [37]; it is estimated that one in four adults aged 45 years and older has hearing loss [36], out of which a third have a disabling hearing loss (40 dB or more) [38]. These numbers stress the necessity for alternative approaches providing large-scale personalization of devices, which is currently not feasible because of a lack of audiological resources. On an anecdotal note, an audiologist shared the following story regarding the challenge of personalizing hearing instruments:
“The hearing aid user comes in for a refitting in the middle of the week. I ask, ’Recall a situation where the hearing aids did not perform as you wanted it to’. The patient thinks, and comes up with, ’Well, yeah, I don’t remember that much, but Monday I had an episode.’
I then have to guess what is the essence of this episode, and try to refit the hearing aids to better accommodate similar situations in the future. However, I face several challenges. One is that the users rarely recall episodes, unless they are significant. If it’s a compliant user, they may be writing notes. The second happens only in rare cases. Furthermore, I have to guess what’s needed to be tuned to give a better experience. All of this is based on memory recall and heuristics”.
Establishing sufficiently accurate information about the situation and context, in this case to reconfigure the hearing aid, is not a unique problem in health care. Larsen et al. highlighted a similar problem when treating PTSD patients [26].

4.2. One Size Does Not Fit All

When enabling users to change between multiple settings, as outlined in the present study, a first research question is whether test subjects are willing to interact with their devices. From a limited set of users, we observe over several months that there appears to be an urge not only to actively change programs but also to modify them by adjusting the volume. A caveat here is that the users in the present study were hearing-impaired individuals who, as test persons, were highly motivated to improve their listening experience. Future studies would need to address to what extent broader segments of hearing-aid users would similarly wish to actively improve their listening experience.
From the pilot study presented in this paper, it is evident that users are not one-size-fits-all. The data indicates not just one but several unique behavioral patterns, defining archetypal approaches to dynamically modifying settings. We outline these as different strategies for coping with a changing context, depending on the cognitive state and effort related to multiple listening scenarios. The diversity of these interaction patterns is affected by the changing context. From time, program and volume interactions alone, it becomes clear that various factors stimulate users to adjust, and thus personalize, their hearing aids to adapt to a given context. Here, the context may be summed up in behaviors related to the difference between weekdays, on which work-related activities in many cases represent external demands, and weekends, which might be characterized by leisure activities, defining a baseline in the general needs for augmenting listening scenarios. However, we also observe user interactions that might rather be related to the cognitive load experienced during the day, such as selecting programs in the evening that offer attenuation of noise in order to rest the ears and brain. The diversity illustrated in the user interactions highlights the need for a personalized fitting process. Our findings indicate that there are multiple coping strategies, involving not only noise reduction and volume but also changes to the timbre of the sound, when aiming to optimize the listening experience for each user.
Whether this results in improved speech intelligibility for the users or an overall better listening experience remains to be validated. Solely looking into the unique behavioral patterns, we observe individual coping strategies that seem to be preserved over days, weeks or even months.

4.3. Involvement and Engagement May Lead to a Higher Satisfaction

By empowering users to change settings related to both the attenuation of noise and the timbre in terms of brightness, we observe consistent behavioral patterns suggesting that engaging with the hearing device creates an awareness of how best to cope in different sound environments. Future studies involving more users need to assess to what extent the ability to modify settings and volume translates into a significant improvement in the hearing aid outcome measures defining perceived user satisfaction.
Several of the users in the present study have hinted at this. One of our test participants said the following: “When I’m part of such an experiment, where I have to pay attention to when and how I can benefit the most from my hearing aids, it does affect how I use them. Even when a program which enhances brightness sounds harsher in some context, on the other hand it helps me understand speech. I wouldn’t have chosen such a program before the experiment, but would rather have stayed in a program which by default attenuates noise. Now I can better see the benefits of the different programs, in order to assess when one, or the other, would be most beneficial for me.” For the program with automatic noise reduction and attenuation engaged on the basis of acoustical characteristics, the test subjects reported difficulties in hearing the perceptual difference unless they chose the extremes of the spectrum.

4.4. The Next Steps to Create Better Hearing Experiences

While considered out of scope in the present study, we plan future experiments investigating how the observed user-initiated program and volume changes relate to the changing auditory context. That is, whether the sound pressure level, modulation characteristics and SNR describing how the devices perceive the changing sound environments correlate with user-initiated program or volume changes. Alternatively, if the auditory context remains constant while the user interacts by changing the program or volume, this may rather reflect the user’s cognitive state related to the time of day or fatigue. Or, if apparently similar soundscapes do not always trigger the same user preferences in terms of program or volume changes, it may indicate that the activities are different: a similarly noisy environment occurring during a workout session or an important meeting may trigger very different user interactions. Additional contextual parameters retrieved from smartphone motion data, calendar events or biometric sensors such as heart rate may need to be combined in order to describe both the sound environment and the corresponding user preferences.
Essentially, our aim is to investigate how to optimally learn intents from user-generated data and thereby predict contextual preferences on the basis of behavioral interaction patterns.
Overall, we wish to explore how active participation can improve the outcome measures constituting user satisfaction. Empowering the user to become an active part of the treatment is not limited to audiology but constitutes a central component when rethinking health care by involving patients, supported by IoT technologies and the ability to learn from user-generated data.
Optimizing the clinical workflow of hearing aid fitting by making the user an active part of the solution will have an impact on clinicians, the next of kin and policymakers. What we see in the data of this pilot study, where users to a much higher extent than previously reported were able to cope by remaining in an omnidirectional setting without noise reduction, may reflect their ability to actively shift their attention, resulting in a corresponding attenuation of unwanted sounds in the auditory cortex. We listen with our ears but understand using our brains. Empowering hearing-impaired users to actively define their preferences could trigger a paradigm shift allowing for context-aware augmented hearing solutions, which dynamically adapt the devices to the changing context by continuously learning from user-generated data.

Acknowledgments

This work is supported by the Technical University of Denmark, Copenhagen Center for Health Technology (CACHET) and the Oticon Foundation. We would like to thank Eriksholm Research Centre and Oticon A/S for providing hardware, access to test subjects, clinical approval and clinical resources. We would like to thank Anida Memic, Atefeh Hafez and Claus Nielsen for the clinical support.

Author Contributions

Benjamin Johansen and Michael Kai Petersen conceived the experimental setup and performed the experiments with subsequent follow-ups; Benjamin Johansen and Maciej Jan Korzepa performed the data analysis and visualizations; Benjamin Johansen wrote the paper; Michael Kai Petersen contributed to writing the paper; Jan Larsen, Niels Henrik Pontoppidan and Jakob Eg Larsen contributed proofreading, input and supervision.

Conflicts of Interest

The authors have no conflicts of interest related to funding; however, clinical resources and access to test subjects and hardware were provided by Oticon A/S.

Appendix A. Study Data

References

  1. Archbold, S.; Lamb, B.; O’Neill, C.; Atkins, J. The Real Cost of Adult Hearing Loss; The Ear Foundation: Nottingham, UK, 2014; pp. 1–24. [Google Scholar]
  2. Livingston, G.; Sommerlad, A.; Orgeta, V.; Costafreda, S.G.; Huntley, J.; Ames, D.; Ballard, C.; Banerjee, S.; Burns, A.; Cohen-mansfield, J.; et al. Dementia prevention, intervention, and care. Lancet 2017. [Google Scholar] [CrossRef]
  3. Amieva, H.; Ouvrard, C.; Giulioli, C.; Meillon, C.; Rullier, L.; Dartigues, J.F. Self-reported hearing loss, hearing aids, and cognitive decline in elderly adults: A 25-year study. J. Am. Geriatr. Soc. 2015, 63, 2099–2104. [Google Scholar] [CrossRef] [PubMed]
  4. Ronnberg, J.; Rudner, M.; Lunner, T. Cognitive hearing science: The legacy of Stuart Gatehouse. Trends Amplif. 2011, 15, 140–148. [Google Scholar] [CrossRef] [PubMed]
  5. Lunner, T.; Rudner, M.; RÖnnberg, J. Cognition and hearing aids. Scand. J. Psychol. 2009, 50, 395–403. [Google Scholar] [CrossRef] [PubMed]
  6. Ng, E.H.N.; Rudner, M.; Lunner, T.; Pedersen, M.S.; Rönnberg, J. Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. Int. J. Audiol. 2013, 52, 433–441. [Google Scholar] [CrossRef] [PubMed]
  7. Davis, A.; Smith, P.; Ferguson, M.; Stephens, D.; Gianopoulos, I. Acceptability, benefit and costs of early screening for hearing disability: A study of potential screening tests and models. Health Technol. Assess. Southampt. 2007, 11, 1–294. [Google Scholar] [CrossRef]
  8. Hartley, D.; Rochtchina, E.; Newall, P.; Golding, M.; Mitchell, P. Use of hearing aids and assistive listening devices in an older Australian population. J. Am. Acad. Audiol. 2010, 21, 642–653. [Google Scholar] [CrossRef] [PubMed]
  9. McCormack, A.; Fortnum, H. Why do people fitted with hearing aids not wear them? Int. J. Audiol. 2013, 52, 360–368. [Google Scholar] [CrossRef] [PubMed]
  10. Arlinger, S.; Nordqvist, P.; Öberg, M. International outcome inventory for hearing aids: Data from a large Swedish quality register database. Am. J. Audiol. 2017, 26, 443–450. [Google Scholar] [CrossRef] [PubMed]
  11. Humes, L.E.; Rogers, S.E.; Quigley, T.M.; Main, A.K.; Kinney, D.L.; Herring, C. The effects of service-delivery model and purchase price on hearing-aid outcomes in older adults: A randomized double-blind placebo-controlled clinical trial. Am. J. Audiol. 2017, 26, 53–79. [Google Scholar] [CrossRef] [PubMed]
  12. Killion, M.C. New thinking on hearing in noise: A generalized articulation index. Semin. Hear. 2002. [Google Scholar] [CrossRef]
  13. Wendt, D.; Hietkamp, R.K.; Lunner, T. Impact of noise and noise reduction on processing effort: A pupillometry study. Ear Hear. 2017, 38, 690–700. [Google Scholar] [CrossRef] [PubMed]
  14. Marozeau, J.; Florentine, M. Loudness growth in individual listeners with hearing losses: A review. J. Acoust. Soc. Am. 2007, 122, EL81–EL87. [Google Scholar] [CrossRef] [PubMed]
  15. Le Goff, N. Amplifying Soft Sounds—A Personal Matter; Technical Report; Oticon: Smorum, Denmark, 2015. [Google Scholar]
  16. Le Goff, N.; Jensen, J.; Pedersen, M.S.; Callaway, S.L. An Introduction to OpenSound Navigator™; Oticon: Smorum, Denmark, 2016; pp. 1–9. [Google Scholar]
  17. Dahl, Y.; Hanssen, G.K. Breaking the sound barrier: Designing for patient participation in audiological consultations. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose, CA, USA, 7–12 May 2016; pp. 3079–3090. [Google Scholar]
  18. Dahl, Y.; Linander, H.; Hanssen, G.K. Co-designing interactive tabletop solutions for active patient involvement in audiological consultations. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational, Helsinki, Finland, 26–30 October 2014; pp. 207–216. [Google Scholar]
  19. Laplante-Lévesque, A.; Nielsen, C.; Jensen, L.D.; Naylor, G. Patterns of hearing aid usage predict hearing aid use amount (data logged and self-reported) and overreport. J. Am. Acad. Audiol. 2014, 25, 187–198. [Google Scholar] [CrossRef] [PubMed]
  20. Aldaz, G.; Puria, S.; Leifer, L.J. Smartphone-based system for learning and inferring hearing aid settings. J. Am. Acad. Audiol. 2016, 27, 732–749. [Google Scholar] [CrossRef] [PubMed]
  21. Nielsen, J.B.B.; Nielsen, J.; Larsen, J. Perception-based personalization of hearing aids using gaussian processes and active learning. IEEE/ACM Trans. Speech Lang. Process. 2015, 23, 162–173. [Google Scholar] [CrossRef]
22. Nielsen, J.B. Systems for Personalization of Hearing Instruments: A Machine Learning Approach. Ph.D. Thesis, DTU Compute, Lyngby, Denmark, 2015.
23. Epstein, D.A.; Lee, N.B.; Kang, J.H.; Agapie, E.; Schroeder, J.; Pina, L.R.; Fogarty, J.; Kientz, J.A.; Munson, S. Examining menstrual tracking to inform the design of personal informatics tools. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Denver, CO, USA, 6–11 May 2017; pp. 6876–6888.
24. Saeb, S.; Zhang, M.; Karr, C.J.; Schueller, S.M.; Corden, M.E.; Kording, K.P.; Mohr, D.C. Mobile phone sensor correlates of depressive symptom severity in daily-life behavior: An exploratory study. J. Med. Internet Res. 2015, 17, e175.
25. Wang, R.; Chen, F.; Chen, Z.; Li, T.; Harari, G.; Tignor, S.; Zhou, X.; Ben-Zeev, D.; Campbell, A.T. StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Seattle, WA, USA, 13–17 September 2014; pp. 3–14.
26. Larsen, J.E.; Eskelund, K.; Christiansen, T.B. Active self-tracking of subjective experience with a one-button wearable: A case study in military PTSD. arXiv 2017, arXiv:1703.03437.
27. Cuttone, A.; Bækgaard, P.; Sekara, V.; Jonsson, H.; Larsen, J.E.; Lehmann, S. SensibleSleep: A Bayesian model for learning sleep patterns from smartphone events. PLoS ONE 2017, 12, e0169901.
28. Mamykina, L.; Mynatt, E.D.; Davidson, P.R.; Greenblatt, D. MAHI: Investigation of social scaffolding for reflective thinking in diabetes management. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, 5–10 April 2008; pp. 477–486.
29. Oticon. Opn hearing aid offers ‘open sound’ experience. Hear. J. 2016, 69, 44.
30. World Health Organization. Grades of Hearing Impairment; World Health Organization: Geneva, Switzerland, 2011.
31. Elhilali, M. Modeling the cocktail party problem. In The Auditory System at the Cocktail Party; Springer: New York, NY, USA, 2017; pp. 111–135.
32. Litovsky, R.Y.; Goupell, M.J.; Misurelli, S.M.; Kan, A. Hearing with Cochlear Implants and Hearing Aids in Complex Auditory Scenes. In The Auditory System at the Cocktail Party; Springer: New York, NY, USA, 2017; pp. 261–291.
33. Pichora-Fuller, M.K.; Alain, C.; Schneider, B.A. Older Adults at the Cocktail Party. In The Auditory System at the Cocktail Party; Springer: New York, NY, USA, 2017; pp. 227–259.
34. Walden, B.E.; Cord, M.T.; Dyrlund, O. Predicting hearing aid microphone preference in everyday listening. J. Am. Acad. Audiol. 2004, 15, 365–396.
35. World Health Organization; Smikey, L. 1.1 Billion People at Risk of Hearing Loss; World Health Organization: Geneva, Switzerland, 2015.
36. Mathers, C. The Global Burden of Disease: 2004 Update; World Health Organization: Geneva, Switzerland, 2008.
37. Dillon, C.F.; Gu, Q.; Hoffman, H.J.; Ko, C.W. Vision, hearing, balance, and sensory impairment in Americans aged 70 years and over: United States, 1999–2006. NCHS Data Brief 2010, 31, 1–8.
38. World Health Organization. WHO Global Estimates on Prevalence of Hearing Loss; World Health Organization: Geneva, Switzerland, 2012.
Figure 1. Six programs were used over the period of 9 months. The horizontal axis represents the amount of attenuation applied, ranging from natural dampening based only on the shape of the head on the left, to maximum attenuation of ambient sounds on the right. The vertical axis represents the amount of noise reduction, ranging from no noise reduction at the bottom to maximum removal of diffuse noise at the top. The colors represent the brightness of the sound, from dark blue hues, indicating a crisp and bright sound produced by greater amplification of the high frequencies, to orange hues, indicating a soft and round sound produced by less amplification of the mid and high frequencies.
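To make the program space concrete, the sketch below encodes each program as a point along the three dimensions of the caption: directional attenuation, noise reduction and brightness. The `Program` structure and all numeric coordinates are illustrative assumptions; the study does not publish the underlying fitting parameters.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Program:
    """A hearing-aid program located along the three axes of Figure 1.

    All coordinates are hypothetical values on a 0-1 scale; the actual
    fitting parameters used in the study are not published.
    """
    name: str
    attenuation: float      # 0 = natural head shadow only, 1 = max attenuation of ambient sound
    noise_reduction: float  # 0 = no noise reduction, 1 = max removal of diffuse noise
    brightness: float       # 0 = soft/round (less mid/high gain), 1 = crisp/bright (more high gain)

# Hypothetical coordinates contrasting an open, omnidirectional program (P1)
# with a maximally focused, noise-reduced one (P6).
P1 = Program("P1", attenuation=0.0, noise_reduction=0.0, brightness=0.5)
P6 = Program("P6", attenuation=1.0, noise_reduction=1.0, brightness=0.3)
```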
Figure 2. Graphic illustration of the two test periods, run in the fall of 2016 and the spring of 2017. The programs used from September to January were P1–P4, while from February to June, the programs used were P1 and P4–P6.
Figure 3. Percentage-wise distribution of programs throughout the entire experimental period. P1 is yellow, P2 is dark yellow, P3 is brown, P4 is orange, P5 is red and P6 is maroon. We note that a user such as subject 2 relies primarily on one program, in contrast to the more diversified program usage of subject 4.
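A distribution such as the one shown in Figure 3 can be computed directly from the logged program-selection events. The sketch below assumes a minimal log format, a time-sorted list of (timestamp, program) switch events terminated by a sentinel entry marking when the aids were taken off; this schema is an assumption for illustration, not the study's actual data format.

```python
from collections import defaultdict
from datetime import datetime

def program_distribution(events):
    """Percentage of total wear time spent in each program.

    `events` is an assumed log format: a time-sorted list of
    (timestamp, program) tuples, where each entry marks a program
    switch and the final entry (program=None) marks the end of use.
    """
    seconds = defaultdict(float)
    for (t0, prog), (t1, _) in zip(events, events[1:]):
        seconds[prog] += (t1 - t0).total_seconds()
    total = sum(seconds.values())
    return {prog: 100.0 * s / total for prog, s in seconds.items()}

# Hypothetical day for a subject who, like subject 2, relies mostly on P1.
log = [
    (datetime(2016, 9, 1, 8, 0), "P1"),
    (datetime(2016, 9, 1, 12, 0), "P4"),
    (datetime(2016, 9, 1, 13, 0), "P1"),
    (datetime(2016, 9, 1, 20, 0), None),  # hearing aids taken off
]
print(program_distribution(log))  # ≈ {'P1': 91.7, 'P4': 8.3}
```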
Figure 4. Behavior characterized by a preference for switching between omnidirectionality without noise reduction and added gain in the high frequencies, termed brightness. Subjects 1 and 5 appeared to actively use brightness to improve speech intelligibility in challenging listening situations. This is seen in the active choice of programs, marked dark yellow above the dashed line in the first half of the experiment and orange below the dashed line in the second. (a) Subject 1, average daily program usage; (b) subject 5, average daily program usage; (c) subject 1, average daily program usage on weekends; (d) subject 5, average daily program usage on weekends; (e) subject 1, detailed program usage; (f) subject 5, detailed program usage.
Figure 5. Behavior characterized by a preference for switching between omnidirectionality without noise reduction and frontal focus with noise reduction. Subjects 2 and 3 appeared to actively attenuate noise to improve speech intelligibility in challenging listening situations. This is seen in the active choice of programs, marked dark red (P4) above the dashed line and bright red (P5) below the line in the second experiment. (a) Subject 2, average daily program usage; (b) subject 3, average daily program usage; (c) subject 2, average daily program usage on weekends; (d) subject 3, average daily program usage on weekends; (e) subject 2, detailed program usage; (f) subject 3, detailed program usage.
Figure 6. Behavior characterized by switching between omnidirectionality, brightness and frontal focus with noise reduction. Subject 4 initially used both brightness and attenuation of noise to improve speech intelligibility in challenging listening situations, while later primarily preferring a frontal focus combined with noise reduction. This is seen in the active choice of programs, marked yellow and brown above the line in the first experiment and bright red and dark red below the line in the second. (a) Subject 4, average daily program usage; (b) subject 4, average daily program usage on weekends; (c) subject 4, detailed program usage.
Figure 7. Volume interactions over the full experimental period. Red hues indicate volume increases of up to +4, and blue hues indicate volume decreases of down to −4. (a) Subject 1; (b) subject 2; (c) subject 3; (d) subject 4; (e) subject 5.
Figure 8. Percentage-wise distribution of total interactions between program changes and volume adjustments. Subjects 1, 3 and 4 had close to an even split between volume and program interactions, indicating that both were used to augment the sound environment. Subject 2 had a markedly lower number of volume interactions than the rest of the subjects, indicating a preference for using programs, rather than volume, to augment sound.
Figure 9. Volume with respect to program. Programs with fewer than 20 interactions have been excluded. On average, the volume gain for P1 (light yellow) was initially reduced and over time increased across all test subjects. This suggests that the omnidirectional characteristics and lack of noise reduction were initially perceived as too intense, prompting the subjects to decrease the volume; as the subjects adapted to the perceived loudness over time, the general trend was to increase the volume again. (a) Subject 1 coped by increasing the volume in the brightest program (P2, yellow), which may indicate a need for more presence, and more amplification in the high frequencies, to improve speech intelligibility; (b) subject 2 coped by actively using volume to zoom in and out, primarily using the default program P1; (c) subject 3 coped by initially lowering the volume in P1 and increasing it again over time when adapting to the intensity; the increase in volume gain seen in programs with noise reduction may suggest a need to zoom in to compensate for a perceived lack of intensity; (d) subject 5 actively used programs and reduced the intensity of sound by lowering the volume in P1; (e) subject 4 preferred to reduce the volume in the selected programs to reduce presence; this preference also seems to be reflected in the actively chosen programs, which provide more attenuation of non-voice directional sources and removal of diffuse noise.
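The per-program volume averages underlying Figure 9 amount to grouping the logged volume interactions by the active program and discarding sparsely used programs. Below is a minimal sketch, assuming each interaction is logged as a (program, step) pair with steps in the −4 to +4 range; this log format is an assumption.

```python
from collections import defaultdict

def mean_volume_by_program(interactions, min_count=20):
    """Average relative volume step (-4..+4) per program.

    `interactions` is an assumed log format: one (program, volume_step)
    tuple per volume adjustment. Programs with fewer than `min_count`
    interactions are excluded, mirroring the cutoff used in Figure 9.
    """
    steps = defaultdict(list)
    for program, step in interactions:
        steps[program].append(step)
    return {p: sum(s) / len(s) for p, s in steps.items() if len(s) >= min_count}

# Hypothetical log: P1 is adjusted often, P4 too rarely to be reported.
print(mean_volume_by_program([("P1", -1)] * 25 + [("P4", 2)] * 5))  # {'P1': -1.0}
```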
Table 1. Demographic information related to six subjects.

Subject | Age Group | Hearing Loss    | Experience with OPN | Occupation
1       | 58        | Moderate–severe | No                  | Working
2       | 76        | Moderate        | No                  | Part-time work
3       | 65        | Moderate        | No                  | Working
4       | 75        | Mild–moderate   | No                  | Retired
5       | 54        | Mild            | Yes                 | Working
6       | 49        | Mild–moderate   | No                  | Working
Table 2. Total usage time for all six programs in hours, for five test subjects.

                     | Subject 1 | Subject 2 | Subject 3 | Subject 4 | Subject 5
Total usage time (h) | 486.25    | 1189.90   | 255.78    | 373.32    | 551.62
Table 3. Average usage in minutes per hour, for five test subjects.

          | P1    | P2   | P3   | P4   | P5   | P6   | Average Per Hour
Subject 1 | 9.71  | 1.41 | 0.11 | 1.14 | 1.07 | 0.31 | 13.74
Subject 2 | 25.48 | 0.00 | 0.01 | 1.38 | 0.14 | 0.66 | 27.67
Subject 3 | 9.28  | 0.00 | 0.00 | 2.46 | 0.30 | 0.06 | 12.10
Subject 4 | 6.70  | 0.69 | 2.86 | 8.98 | 0.15 | 0.84 | 20.23
Subject 5 | 12.65 | 1.64 | 0.21 | 0.78 | 1.18 | 0.05 | 16.51
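Table 3 normalizes each program's total usage by the length of the observation window, so the per-program values for a subject sum to the rightmost column up to rounding (for subject 1: 9.71 + 1.41 + 0.11 + 1.14 + 1.07 + 0.31 ≈ 13.74). A minimal sketch of that arithmetic, assuming the observation window is known in hours:

```python
def minutes_per_hour(program_hours, observed_hours):
    """Average minutes of use per elapsed hour, per program (cf. Table 3).

    `program_hours` maps program name to total hours of use;
    `observed_hours` is the assumed observation window in hours
    (its exact definition in the study is not restated here).
    """
    return {p: 60.0 * h / observed_hours for p, h in program_hours.items()}

# Hypothetical example: 10 hours in P1 within a 40-hour window -> 15 min/h.
print(minutes_per_hour({"P1": 10.0}, 40.0))  # {'P1': 15.0}
```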
