Review

A Survey on the Affordances of “Hearables”

by Joseph Plazak 1 and Marta Kersten-Oertel 1,2,*
1 Department of Computer Science & Software Engineering, Concordia University, 1455 Boulevard de Maisonneuve O, Montreal, QC H3G 1M8, Canada
2 Perform Centre, Concordia University, 1455 Boulevard de Maisonneuve O, Montreal, QC H3G 1M8, Canada
* Author to whom correspondence should be addressed.
Inventions 2018, 3(3), 48; https://doi.org/10.3390/inventions3030048
Submission received: 29 May 2018 / Revised: 9 July 2018 / Accepted: 10 July 2018 / Published: 14 July 2018
(This article belongs to the Special Issue Frontiers in Wearable Devices)

Abstract:
Recent developments pertaining to ear-mounted wearable computer interfaces (i.e., “hearables”) offer a number of distinct affordances over other wearable devices in ambient and ubiquitous computing systems. This paper provides a survey of hearables and the possibilities that they offer as computer interfaces. Thereafter, these affordances are examined with respect to other wearable interfaces. Finally, several historical trends are noted within this domain, and multiple paths for future development are offered.

1. Introduction

One of the goals of ubiquitous computing is that good technology will blend seamlessly into our lives and workflows in such a way that we fail to even notice that it exists [1]. One major advancement towards this particular aim has been the advent and proliferation of wearable computing, in which computing devices are designed to be worn by the user for prolonged periods of time [2]. The large group of devices that fits within the realm of wearable computing is often referred to as “wearables”, and includes small and large fitness trackers (pedometers, accelerometers), smart watches, smart eyewear, and many other types of devices. In this paper, we examine a specific class of wearable devices that are mounted or attached to the ear(s); a class of wearables frequently referred to as “hearables” [3].
Nick Hunn has presented two working definitions of “hearables”: the first, in 2014 [3], as any hearing device which includes wireless connectivity, and the second, in 2016 [4], as “anything that fits in or on an ear that contains a wireless link, whether that’s for audio, or remote control of audio augmentation”. Thus, hearables represent a wide variety of devices, ranging from wireless audio headphones to smart hearing aids. While hearables are primarily designed for the presentation and recording of audio signals, there are many other possibilities for interaction, and these affordances are explored below. In line with both Gibson [5] and Gaver [6], we use the term “affordances” to refer to attributes of an object that facilitate certain actions in a way that is independent of (yet related to) the perception of the user. The affordances discussed below are general in nature, with each feature offering a range of uses and applications. The concept of affordances allows us to not only trace the history of ear-mounted devices and demonstrate how they have evolved, but also to provide insight into which affordances may provide the most utility for future applications.
Hearables offer a number of unique computer-interface affordances, particularly due to the perceptual proclivities of human hearing. As opposed to the eyes, which may attend to only one view at a time, the ears are capable of decomposing complex auditory scenes [7] and selectively attending to certain sources, a phenomenon referred to as the cocktail party problem [8]. In this specific regard, hearables have great potential to “blend” into our everyday environments by focusing or ignoring incoming information, all while leaving the eyes free to attend to other tasks. The “eyes-free” feature of hearables is therefore one of their strongest affordances. Further, hearables are also referred to as “hands-free” devices, especially within the context of driving applications. This “eyes-and-hands-free” affordance of hearables ensures that they do not impede our abilities to visually or tactilely interact with the world, thus offering unique possibilities as computer interfaces.
While hearables have a number of affordances for ubiquitous computing, many of these affordances are particularly relevant for affective computing, in which computers have the ability “to recognize, express, and “have” emotions” [9]. Hearables are particularly useful for recording affective data input, as the ear is a rich source for collecting many types of biological signals pertaining to our emotional states; ear-mounted sensors can potentially monitor cardiovascular, respiratory, perspiration, and brain states [10,11]. This type of data has been utilized in a variety of different applications, including fitness tracking, health monitoring, improved computer interfaces, and various context-aware/“smart” systems. Traditionally, each type of application utilizes only a handful of the potential affordances offered by hearables.
The aim of this paper is to bring awareness to the current and proposed affordances of hearables. It begins by tracing the evolution of headphones towards present-day hearables. Next, the affordances of hearables for ubiquitous computing systems are detailed, including those that have been exploited and several that remain theoretically possible. Thereafter, a comparison of hearables is made against other wearables. Finally, current and future trends are examined within existing and proposed hearables, and conclusions are drawn regarding the most immediate implications of these findings.

2. From Headphones to Hearables

In order to situate the affordances of hearables within the realm of ubiquitous computing, it is helpful to consider how these devices have evolved. In addition to the timeline information presented in Figure 1, the following section highlights several of the main developments within the history of hearables.

2.1. Headphones

The precursor to audio headphones is often considered to be the medical stethoscope, a tool designed to isolate biological sound information [12]. Early hearing aids, some of which were purely mechanical devices, have also been considered to be precursors to audio headphones [13]. The early history of audio headphones can largely be situated within the radio and telecom industries [14]. First generation radio operators working in close physical proximity to one another required isolated presentations of audio information, and thus a number of headphone designs appeared between 1900 and 1920 [14,15]. Headphones not only facilitated isolated audio presentations, but by the mid-1920s, they were also employed for personal enjoyment to create a continuous zone of personalized space while listening to radio programming or phonograph records [14,16].
Headphones have evolved in many ways since the 1920s, and features such as sound quality, design, and cost have improved vastly. Whereas early teleoperator headsets were designed to optimize speech perception, today’s headphones are typically optimized for music and multimedia. Prior to the rise of mobile audio devices in the 1980s, headphones for personal listening could afford to be heavy or awkward looking, as their use was constrained to a tethered sound source [16]. Thereafter, the visual appearance of headphones became more important, not only for facilitating a number of application-specific designs (such as aviation headsets, disc jockey headsets, etc.) but also for projecting fashionability and social status [17].
Summary: Headphones afford the isolated presentation of audio, which has evolved from an affordance of private listening to an affordance of defining private space.

2.2. Headsets

The terms headphones and headsets are sometimes used interchangeably, yet the term headset generally refers to the combination of headphones and a microphone. While the evolutionary histories of headphones and headsets are similar, there are a few affordances that deserve to be mentioned separately.
Whereas headphones are strictly devices for one-way audio presentation, headsets have the ability to both send and receive audio information. This two-way dimension of communication was of obvious importance to the early radio and telecom industry. Despite the existence and feasibility of early headsets, there were few commercial applications in which the simultaneous sending and receiving of audio information had to be isolated within a given environment. Even as audio technology moved towards becoming more mobile (transistor radios, Walkman, mp3 players, etc.), few applications required a device that combined headphones and a microphone. Several exceptions included voice-controlled computer systems and voice-over-IP programs [18].
Summary: Headsets afford two-way sound communications between a person and a device, one of the most natural forms of interaction.
Both headphones and headsets are now ubiquitous in daily life; a recent survey of technology carried by consumers in the USA found that headphones were the second most commonly carried technology (after the smartphone) [19]. The sight of a person wearing headphones in public hardly seems out of place anymore, nor does the sight of someone apparently talking to themselves while wearing a headset.

2.3. Hearables

The move from wired to wireless headphones and headsets marks the potential “arrival” of hearables, at least according to Hunn’s operational definition [3,4]. While the ability to send and receive audio signals wirelessly has been around since the early days of radio, the ability for a headset to wirelessly communicate with a computer is a relatively new technology. Wireless communication via Bluetooth technology has been a major breakthrough for wearables in general, as it affords the ability for data to be captured or sent to an independent location in order to be processed (i.e., a smartphone, cloud server, etc.). Notably, Nokia’s first Bluetooth Headset in 2003 (HDW-2), which linked a user’s cell phone to a monaural headset, demonstrated the power of this new technology, and might be considered the first true “hearable.” Despite its current widespread use, Bluetooth communication between devices is not without limitations, as several connectivity and security issues remain unresolved. However, Bluetooth currently serves as the core technology for making wireless headsets (and thus, hearables) possible.
Summary: Wireless headsets (hearables) afford untethered audio communication between users and devices, allowing this technology to disappear into the background of our lives.

3. Affordances of Hearables: Ear-Mounted Sensors

Hearables offer a number of affordances for user interaction and data input. Below, the various data inputs of hearables are explored, ranging in complexity from simple tactile buttons to sophisticated cloth electrodes capable of simultaneously recording multiple biological signals.

3.1. Tactile Input

Before audio technology entered its “age of mobility,” all of the necessary device controls, such as buttons, knobs, switches, and dials, could be integrated within the audio (i.e., source) device. However, as audio devices started to miniaturize, “remote” control of the devices became desirable. As an example, Apple’s first iteration of earbuds (released with the iPod in 2001) were merely a pair of headphones; tactile remote-control buttons were not added until the release of the third-generation iPod shuffle (2009)—a much smaller device. Remote pushbutton controls (i.e., those not appearing on the source device) began to appear alongside digital audio media and digital audio systems, such as the compact disc and minidisc. Early minidisc players not only had volume and track controls built into the headphone jacks, but several models also had remote displays which could present textual audio metadata.
Despite its simplicity, tactile control input will likely play an important role in the future evolution of hearables, even as these devices continue to shrink in size. Certain tasks, such as volume adjustment, are well suited to control via tactile input (as opposed to voice control or other methods). Further, tactile pushbuttons, while familiar and available on many devices, could potentially be replaced by small capacitive touchscreen interfaces, similar to Bragi’s first generation hearable named “The Dash”, which facilitates control via multiple taps and directional swipes [20].
Summary: Tactile controls afford the ability to trigger and control computer interactions.

3.2. User Motion Input

A key affordance that hearables can provide for ubiquitous systems is user location and motion data. As Weiser stated, “If a computer merely knows what room it is in, it can adapt its behavior in significant ways without requiring even a hint of artificial intelligence” [1]. Therefore, obtaining accelerometer, compass, and location data (GPS) directly from hearables opens a number of possibilities for natural user interfaces [21].

3.2.1. Accelerometer

Just as touch gestures (swipes, pinches, rotations, etc.) created new possibilities for user interface (UI) design, head gestures obtained from hearables offer similar potential. Head gestures need not be complicated; even simple yes (forward-backward) and no (side-to-side) gestures, such as those found on the Bragi Dash, have been a great first step towards improved natural user interfaces. The possibilities could go much further to include static head position information, which has been found to be useful for recognizing certain types of emotional states [22]. Accelerometer data can also be used in other applications, including sensing small internal vibrations for differentiating user noise from external noise, and tracking a user’s steps or other types of health-related movement.
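To make the gesture idea concrete, the following Python sketch (purely illustrative; the axis mapping, sampling rate, and threshold are assumptions, not taken from any shipping device) classifies a short window of accelerometer samples as a nod or a shake by checking which axis carries the dominant oscillation:

```python
import numpy as np

def classify_head_gesture(samples, min_amp=2.0):
    """Classify a window of 3-axis accelerometer samples (n x 3 array,
    columns along the pitch/yaw/roll axes, in m/s^2) as a head gesture.

    A nod ("yes") oscillates mainly on the pitch axis, a shake ("no")
    mainly on the yaw axis; `min_amp` rejects idle movement.
    """
    samples = np.asarray(samples, dtype=float)
    motion = samples - samples.mean(axis=0)   # remove gravity/offset
    pitch_energy, yaw_energy, _ = motion.std(axis=0)
    if max(pitch_energy, yaw_energy) < min_amp:
        return "none"                         # too small to be a gesture
    return "yes" if pitch_energy > yaw_energy else "no"

# Synthetic 1 s nod sampled at 50 Hz: 2 Hz oscillation on the pitch axis.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
nod = np.stack([4.0 * np.sin(2 * np.pi * 2 * t),
                0.2 * rng.standard_normal(50),
                0.2 * rng.standard_normal(50)], axis=1)
print(classify_head_gesture(nod))  # "yes"
```

A real classifier would of course need debouncing and per-user calibration; the point is only that the raw signal required for such interactions is already present in the device.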

3.2.2. Compass

The technological affordances offered by placing a digital compass within a headset have implications for audio augmented reality (AAR) applications. AAR involves presenting users with context-specific audio information, often for highlighting local features within a museum or other place of public interest [23,24]. Research has found that the placement of a digital compass on the head, as opposed to other locations on the body, results in the best subjective ratings of realism in AAR situations [25], largely because ear-mounted compasses remain in a fixed position relative to the eyes. For the same reason, ear-mounted compass information also offers important affordances for improving heading information within audio-based navigation applications.
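As a minimal illustration of why a head-fixed compass matters for AAR, the sketch below (a hypothetical helper, assuming flat local east/north coordinates) computes the azimuth of a point of interest relative to the listener’s head heading, which is the quantity an AAR renderer needs in order to spatialize an audio cue:

```python
import math

def relative_azimuth(head_heading_deg, user_pos, poi_pos):
    """Bearing of a point of interest relative to the head's facing
    direction, in degrees in [-180, 180); negative means to the left.

    Positions are (east, north) pairs in metres; `head_heading_deg` is a
    compass bearing (0 = north, clockwise positive), e.g. read from an
    ear-mounted digital compass.
    """
    de = poi_pos[0] - user_pos[0]
    dn = poi_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(de, dn)) % 360.0  # bearing to the POI
    return (bearing - head_heading_deg + 180.0) % 360.0 - 180.0

# Facing north, a POI due east sits about 90 degrees to the right.
print(relative_azimuth(0.0, (0.0, 0.0), (10.0, 0.0)))  # approximately 90.0
```

A wrist- or pocket-mounted compass would report the device’s heading, not the head’s, which is precisely why head-mounted placement rates as more realistic in the studies cited above.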

3.2.3. GPS

GPS receivers embedded in hearables offer the ability to track user movement on a larger scale than accelerometer measurements. Such information has been employed within other wearable devices for fitness tracking, as well as navigational guidance. One example of a hearable utilizing GPS technology is the Sony Smart B-Trainer, a smart headset featuring a number of different sensor inputs. Positioning a GPS receiver or a GPS flexible antenna [26] on the head, as opposed to within a smartphone, may potentially allow for improved signal reception. Despite these affordances, GPS data is still limited to areas where satellite signals can be obtained, thus creating limitations for indoor applications.
Summary: Ear-mounted user-motion sensors afford the ability to monitor user state and user location, thus offering alternatives to computer vision-based solutions.

3.3. Biosignal Inputs

Ear-mounted biosensors that directly record a user’s biological information offer a number of possibilities for affect-aware computing [27]. The affordances of several different types of sensors are covered below. In general, all of these biosensors allow users to monitor and visualize various aspects of their current state. Beyond merely recording and/or displaying data, a number of possibilities exist for utilizing bio-data in various applications, such as triggering alerts, providing real-time feedback to alter/synchronize with other bio-signals, as well as guiding “smart-curation” data presentation systems.

3.3.1. Photoplethysmography (PPG)

Reflective photoplethysmography (PPG) sensors have appeared in a number of different wearable devices, including various health trackers, smartwatches, and hearables [10,28,29], providing a relatively inexpensive and non-invasive method for obtaining data pertaining to the user’s heart rate and heart rate variability. The underlying technology for obtaining heart rate and other blood-related activity is remarkably simple, and therefore some wearable devices on the market even contain multiple PPG sensors for redundancy. Using a paired light source and photosensitive measuring component, PPG makes it possible to measure changes in blood flow, as oxygenated and deoxygenated blood reflect light differently [29]. The ear has a number of affordances compared to other PPG measurement locations, as the sensor can be naturally protected and secured within the ear canal, thus making measurements less susceptible to motion and external light artifacts.
With specific regards to affective affordances, large-scale heart rate data may be useful for identifying general levels of user activity/restfulness. Further, smaller scale heart rate variability measurements can be useful for measuring changing mental states, stress responses, and potentially monitoring a number of clinical diagnosis markers across many different medical specialties [30]. Optical measurements of heart-related data are quite different from electrical measurements, and therefore, the use of PPG for certain clinical applications (such as monitoring abnormal heart rhythms) may not be possible [31]. Beyond heart-specific information, PPG is also used in applications that monitor blood-oxygen levels [32]. Within the specific context of in-ear measurements, real-time blood oxygen monitoring and alerting could be useful in a number of high-altitude applications (aviation, mountain-related sporting, etc.), especially considering in-ear measurements could be obtained hands-free.
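To illustrate the measurement principle (and only the principle: a real device would band-pass filter the waveform and reject motion artifacts first), the following sketch estimates heart rate from a PPG-like signal by counting systolic peaks:

```python
import numpy as np

def heart_rate_from_ppg(signal, fs_hz):
    """Estimate heart rate (bpm) from a PPG waveform by counting
    systolic peaks: local maxima above the mean, with a 0.33 s
    refractory period (i.e., a 180 bpm ceiling)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    refractory = int(0.33 * fs_hz)
    peaks, last = [], -refractory
    for i in range(1, len(x) - 1):
        if x[i] > 0 and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            if i - last >= refractory:      # skip peaks that arrive too soon
                peaks.append(i)
                last = i
    if len(peaks) < 2:
        return 0.0
    return 60.0 * fs_hz / np.mean(np.diff(peaks))  # bpm from mean peak interval

# Synthetic 10 s PPG-like waveform at 72 bpm (1.2 Hz), sampled at 100 Hz.
fs = 100
t = np.arange(0.0, 10.0, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)
print(round(float(heart_rate_from_ppg(ppg, fs))))  # 72
```

Heart rate variability analysis would work from the same peak train, using the distribution of the inter-peak intervals rather than their mean.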

3.3.2. Electrophysiology

There are several types of biosignals that can be obtained from differential electrophysiology, including signals pertaining to brain state (EEG), muscle activity (EMG), and eye activity (EOG), amongst others. The apparatuses for collecting these signals involve a minimum of three contacts: a ground reference, as well as a positive and a negative signal electrode. Remarkably, research has demonstrated not only that differential electrophysiology measurements can be obtained from the ear canal [33], but that simple single-channel measurements can potentially be decomposed to reveal multiple underlying signal types [34].
While there are vast amounts of literature on electroencephalography (EEG) recordings, here we focus on only a few affordances that might be gained from relatively small ear-mounted systems. Existing systems have been designed to collect EEG data from around [35] or within the ear [33]. In particular, single-channel EEG data, when paired with appropriate signal processing techniques, affords the inference (or classification) of coarse patterns of mental activity, such as alertness, drowsiness, concentration, relaxation, sleep patterns, active thinking, etc. Monitoring such states continuously over long periods of time, provided the signals are clear enough, could provide powerful monitoring solutions for affect-aware applications, as well as a number of other critical applications involving human alertness.
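The kind of coarse classification described above can be sketched as follows; the alpha/beta band boundaries are conventional, but the single-ratio decision rule is a deliberate simplification and not a validated classifier:

```python
import numpy as np

def band_power(eeg, fs_hz, lo_hz, hi_hz):
    """Mean spectral power of a single-channel EEG segment in [lo_hz, hi_hz)."""
    spectrum = np.abs(np.fft.rfft(eeg - np.mean(eeg))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs_hz)
    band = (freqs >= lo_hz) & (freqs < hi_hz)
    return spectrum[band].mean()

def coarse_mental_state(eeg, fs_hz):
    """Very coarse relaxed/alert label: relaxation is commonly associated
    with elevated alpha (8-12 Hz) power relative to beta (13-30 Hz)."""
    alpha = band_power(eeg, fs_hz, 8.0, 12.0)
    beta = band_power(eeg, fs_hz, 13.0, 30.0)
    return "relaxed" if alpha > beta else "alert"

# Synthetic 4 s segment dominated by a 10 Hz alpha rhythm plus noise.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0.0, 4.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))
print(coarse_mental_state(eeg, fs))  # "relaxed"
```

Long-term monitoring applications would run such a classifier over a sliding window of segments, smoothing the labels over time rather than trusting any single four-second estimate.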
EEG data has already been used within Brain-Computer Interfaces (BCI) for years in a number of applications like wheelchair control, Augmentative and Alternative Communication (AAC) systems, drive-by-wire automobiles, etc. It remains to be seen if the relatively simple EEG measurements obtained from hearables will be capable of serving in similar applications. While many of these systems have been focused on applications for the disabled, able-bodied persons will also be able to benefit from similar applications [36]. Further, single channel EEG measurements can also be used to monitor a user’s general sleep patterns [11].
Electromyography (EMG) captures electrical signals stemming from muscle activity, and ear-mounted systems can not only record ear muscle gestures (such as an ear wiggle), but also jaw, mouth, nose, and eyebrow gestures [37]. EMG has recently been implemented as a computer control device in non-ear-mounted solutions, such as EMG bracelets [38]. One of the real affordances of EMG signals is the ability to associate the contraction of certain muscles with various computer controls [36], such as flexing the wrist upwards in order to turn on one’s mobile device. Within hearables, EMG signals could likewise be used to trigger various computer functions, or to silently respond to computer-generated prompts. EMG data and EEG data, while both noisy, have been successfully combined in multi-modal systems in order to improve the accuracy of BCIs [38,39].
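A trigger of this kind can be sketched as a simple threshold on the short-time RMS envelope of an EMG recording; the window length, units, and threshold below are illustrative assumptions, since real systems calibrate per user and electrode site:

```python
import numpy as np

def emg_trigger(emg, fs_hz, window_s=0.05, threshold=0.5):
    """Return True if the short-time RMS envelope of an EMG recording
    exceeds `threshold` in any window, i.e., a contraction strong
    enough to be mapped to a computer command."""
    x = np.asarray(emg, dtype=float)
    n = max(1, int(window_s * fs_hz))
    for start in range(0, len(x) - n + 1, n):
        rms = np.sqrt(np.mean(x[start:start + n] ** 2))
        if rms > threshold:
            return True
    return False

# One second of quiet baseline followed by a 0.1 s simulated contraction.
rng = np.random.default_rng(1)
fs = 1000
quiet = 0.05 * rng.standard_normal(fs)
burst = rng.standard_normal(fs // 10)
print(emg_trigger(np.concatenate([quiet, burst]), fs))  # True
```

Distinguishing *which* muscle fired (ear wiggle vs. jaw clench, say) would require multiple channels or a trained classifier on the envelope shape, but a binary trigger like this is already enough for simple command interfaces.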
In addition to EEG and EMG recordings, differential electrophysiology can also be used to track eye-blinks via electrooculography (EOG). The affordances of EOG measurements are similar to those of the other differential biosignals: namely, the ability to trigger an event with bodily gestures (such as an eye blink, or a series of eye blinks in quick succession). EOG measurements are also used to remove eye-blink artifacts from other electrophysiological data [40].
Electrodermal activity (EDA), a measure of the skin’s conductance, is a similar, yet unique electrophysiological signal that has not, to our knowledge, been exploited in hearables. While it is true that the inside of the ear does not sweat [41], the outside or back of the ear offers a potential location to measure EDA. As opposed to more common measurement locations such as the fingers or wrist, the ears are an area of the body that stays at roughly the same temperature [42]. Unlike differential electrophysiological measurements that require three electrodes, EDA measurements require only two electrodes. The affordance of adding EDA sensors to a hearable would be the ability to collect long-term information pertaining to the arousal level of the user; a feature that has been associated with fear, anger, startle response, orienting response, and sexual feelings [43].

3.3.3. Additional Bio-Signals

Beyond electrical measurements, research indicates that bio-acoustic sound recordings taken from within the ear using tiny microphones might offer unique insights into the user’s affective state. Bio-acoustic recordings may also be useful within applications that use audio solely as an input device, such as user authentication, which is especially important for wearable devices that might be shared by different users [44,45]. Bio-acoustic recordings have been used to monitor cardiac activity and respiration rate [11]. The affordances of monitoring user respiration rate include gauging stress/exertion levels, triggering on punctuated exhaling (such as laughter or sobbing), detecting sleep apnea, etc. While reports indicate that respiration rate may be obtained from recordings of the inner ear, this does not mean that the ear is an ideal location for this measurement. As another example of a hearable-related bio-acoustic signal, a team of researchers has investigated the use of bio-acoustics to capture “tooth click” gestures [43]. Much like ear-wiggles and eye-blink triggers, it is possible to differentiate tooth clicks from different quadrants of the mouth, and therefore employ these clicks for hands-free computer interaction. Further, bio-acoustics could also be used in eating-behavior monitoring applications [46].
One last type of signal that will be discussed is body temperature. The inner ear is a common source for measuring body temperature, and temperature sensors can be easily embedded within hearables. One particular application of in-ear temperature monitoring includes Yono’s device for fertility prediction [47]. Temperature sensors embedded within hearables could potentially also play a role in understanding bodily maps of emotion, as research suggests that certain types of emotion are associated with distinct temperature change patterns [48,49].
Summary: Biosignals afford the ability to infer the affective and cognitive state of the user, thus offering interesting applications within context- and affect-aware computing.
Table 1 presents a brief summary of the principal affordances offered by each feature category. We use the term “principal affordance” in a general sense to summarize the primary benefit associated with each feature, as well as to highlight how ear-mounted devices have evolved over time. While these features and their associated primary affordances hold across other wearable devices, the proposed summary is specifically aimed towards understanding the evolution of hearables. Further, Table 2 presents a summary of the principal affordances offered by each of the biosensors discussed in the section above.

4. Comparison of Hearables to Other Wearables

The previous section detailed some of the affordances of hearables; below, we expound on some of the differences between hearables and other wearable computers, such as wrist-mounted or eye-mounted devices. In particular, we investigate several design considerations for these devices, including visibility, fit, robustness, ruggedness, stowability, battery life, and computational limitations.

4.1. Visibility (Covertness)

The visibility of a wearable device is largely determined by its size and where it is worn. Even though hearables tend to be among the smaller wearable computing devices, their ear-mounted location tends to make them highly visible in comparison to a smart-watch or pedometer. There is a spectrum of “visibility” for hearables that ranges from completely covert to intentionally visible. Several devices that are essentially invisible, such as Third Skin’s Hy [50], have been noted as being similar to many recent hearing aid designs [13,51], and it has yet to be determined whether this will be desirable to consumers. The main affordance of covert designs is that they avoid the social judgments that can be passed on visibly noticeable wearables [52], thereby blending seamlessly into the background of our lives and becoming more likely to be adopted by users. Another advantage of covert hearables is that they introduce minimal passive noise reduction compared to devices that block the entire ear canal. However, a downside to completely covert hearables is the confusion that can arise when interacting with the device via voice commands in the presence of others.
Hearables that fit completely within the ear, such as Rowkin’s Bit or the Bragi Dash, can result in passive noise reduction. To address this, several hearable devices use external microphones to pass sound through the device, thus allowing the user to adjust the audio mix between outside sounds and device sounds. Such devices, which may be used to augment a user’s nearby auditory scene, are quite similar to hearing aids. Several companies, e.g., Resound [53] and Starkey [51], already market hybrid hearing aids/hearables as “smart hearing aids”. Yet another affordance of larger hearables that block the ear canal is the potential to double as water-tight ear-plugs for swimmers, thus offering similar affordances as waterproof smart-watches. In such cases, these hearables often offer stand-alone applications that provide some functionality while the device is not paired to a master device (e.g., mp3 player, computer-generated speech from virtual coaches, etc.).
Finally, hearables can also be designed as high-tech fashion pieces, with such devices protruding from the ear in a way that is plainly visible. The technological affordances offered by this design include better options for microphone placement but come with a potential risk for social judgments [54]. For example, Apple’s AirPods, i.am+’s Buttons, Human’s Sound, and several other hearables noticeably protrude or hang from the ear in order to better capture voice interactions. However, it should be noted that as hearables grow in size, so too does their potential to become physically loosened by contact.

4.2. Fit

Another design aspect of wearables pertains to their comfort and fit. Ideally, wearable devices are designed for continuous use, and they should fit securely and comfortably. Unlike wrist-mounted devices, which are easily adjustable and customizable, eye- and ear-mounted devices involve a more detailed fitting process. Some hearables are sold with different sized rubber ear inserts that allow users to choose the best generic fit, whereas other hearables are made from custom ear molds. Yet other hearables utilize form-fitting memory foam or silicone to allow generic fabrication and also form-fitting comfort. Like other wearable devices, the goal of designers involves ensuring that hearable devices are comfortable enough to be worn all day, both indoors and outdoors.
In tandem with comfort comes the question of obtaining a secure fit. One of the major affordances of hearables is that the ear encounters minimal physical contact throughout the day (especially relative to the hands or wrist). In many ways, the natural shape of the ear helps to secure and protect these devices from falling out. The smaller and more covert the device, the less likely it is to become accidentally dislodged.

4.3. Robust Connection

For wireless wearable devices, a robust wireless connection is an important design aspect, and the vast majority of wearables utilize short-range Bluetooth technology for this purpose. For hearables in particular, obtaining a robust signal, or two signals for binaural systems, between the master source and receiver presents a unique design challenge. Some of these challenges include the ability to receive a signal when the master device (such as a smartphone) is placed in a user’s pocket, or the ability to compensate for latency between left and right earbuds in stereo applications. With regards to the former issue, the physical distance between one’s pocket and one’s ear may not seem that great, but the human body blocks a large portion of the signal, and the ear canal is surrounded by the dense bone of the skull. This, combined with the body’s ability to absorb radio signals, makes maintaining a robust connection between hearables and a paired source difficult. Bluetooth technology continues to evolve, and the latest specification (Bluetooth 5) is designed to increase signal range, speed, and bandwidth [55] in ways that might be well suited to hearable connectivity and robustness. However, increased signal range also means that Bluetooth information could be more prone to malicious interception.
The second major issue pertaining to robust wireless signals involves synchronizing two-channel audio between wireless earbuds. Due to current limitations of the Bluetooth protocol, only one audio signal can be sent within an audio stream, which poses a serious problem for listening to stereophonic music. One current solution involves a two-step communication process, in which the sound source first communicates the entire stereo signal to only the left or right earbud; this earbud then parses the two channels and passes the extra channel along to the remaining earbud via a newly generated radio signal or via near-field magnetic induction [56,57]. This two-step solution can potentially introduce asynchronies in the system that the ear is quite good at detecting. Therefore, for hearables, more so than for other wearable devices, the robustness and reliability of the communication signals can be critical.
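The timing constraint behind this two-step relay can be modeled abstractly. The toy sketch below is not the Bluetooth or NFMI protocol itself; it only captures the idea that both earbuds schedule playback against a shared deadline inside a jitter buffer, so the channels stay aligned unless the relay hop outruns the buffer:

```python
def schedule_stereo_playback(frame_ts_ms, relay_delay_ms, buffer_ms=20.0):
    """Toy model of the two-step relay: the primary earbud receives a
    stereo frame at `frame_ts_ms`, keeps one channel, and forwards the
    other, which arrives `relay_delay_ms` later at the secondary bud.
    Both buds schedule playback at a shared deadline so the channels
    stay sample-aligned despite the relay hop.

    Returns (primary_play_ts, secondary_play_ts, in_sync).
    """
    deadline = frame_ts_ms + buffer_ms            # agreed playback deadline
    secondary_arrival = frame_ts_ms + relay_delay_ms
    if secondary_arrival > deadline:
        # The relay was too slow: the secondary bud cannot make the
        # deadline, and the listener may perceive inter-ear asynchrony.
        return deadline, secondary_arrival, False
    return deadline, deadline, True

# A 5 ms relay hop fits comfortably inside a 20 ms jitter buffer.
print(schedule_stereo_playback(0.0, 5.0))   # (20.0, 20.0, True)
print(schedule_stereo_playback(0.0, 25.0))  # (20.0, 25.0, False)
```

The design tension is visible even in this toy: a larger buffer tolerates slower relays but adds audible latency to everything, which is why tight, reliable inter-bud links matter so much for hearables.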

4.4. Ruggedness and Stowability

Two additional design concerns for wearable devices are ruggedness and stowability. Wearables must be rugged enough to withstand daily wear and tear from insertion, removal, and accidental contact. In general, hearables tend to be made of hard plastic or soft silicone components, which, combined with their light weight, makes them well equipped to withstand most impacts. Unlike other types of wearables, hearables are not affected by cracked displays. Indeed, one of the roughest activities for hearables may be storage when not in use, as most hearables readily fit inside a small pocket. The general solution for stowability has been to combine portable battery chargers with carrying cases, so that the device charges whenever it is not being used. As battery life improves, users may find that charging cases are unnecessary on the go, at which point designing products to withstand “pocket abuse” may become more important.

4.5. Data Input

Hearables, like other wearables, can facilitate many different types of data input, including tactile, motion, and biological control. Multiple devices can often provide complementary, and sometimes overlapping, functionality.
Tactile device control is a feature of nearly all wearable devices, including wrist-, eye-, ear-, and otherwise-worn devices. Because tactile control is strongly guided by visual location, this feature is perhaps least useful for hearables, and a similar limitation applies to smart eyewear. The range of tactile control currently available within hearables varies from none (e.g., the Apple AirPods), to a single multi-function button (e.g., Rowkin), to devices with multiple buttons and/or complex touch-sensitive surfaces (e.g., Bragi). In general, designs that favor simplicity and reduced functionality, such as those found on many fitness trackers, seem to be more popular for wearable devices, including hearables.
Depending on the recording site, user motion data from accelerometers, digital compasses, and GPS units offer different affordances for different applications. For fitness tracking applications, such as counting footsteps, any recording site from head to toe may capture the desired data. However, for specific applications, such as manipulating augmented reality displays, the maneuverability of wrist-mounted wearables offers distinct advantages over eye- or ear-mounted solutions. For other systems, it may be useful to combine user movement data from multiple wearable sources, such as tracking both user location (from a wrist-mounted device) and orientation (from an ear-mounted device) within real or virtual environments.
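A minimal sketch of this kind of sensor fusion for audio navigation (the coordinates and function names are illustrative assumptions, not drawn from any cited system) combines a GPS-derived bearing to a destination with an ear-mounted compass heading:

```python
import math

def bearing_to(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def relative_bearing(target_bearing: float, head_heading: float) -> float:
    """Target direction relative to head orientation, in [-180, 180);
    positive means the audio cue should pan right, negative left."""
    return ((target_bearing - head_heading + 180.0) % 360.0) - 180.0

# User at (45.0, -73.0) facing north; destination slightly to the east:
target = bearing_to(45.0, -73.0, 45.0, -72.9)
print(round(relative_bearing(target, head_heading=0.0)))  # 90 -> pan hard right
```

Here the wrist or phone supplies position while the ear supplies orientation; neither device alone could place the sound cue correctly.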
Of all the wearable devices designed to collect bio signals, hearables offer perhaps the largest number of affordances. Currently available wrist-mounted wearables include devices designed to record PPG, EMG, and EDA. While these measurements can be obtained from the wrist, there are multiple reasons why this location may not be ideal [58,59]. For the purposes of BCI, head-mounted wearables have been devised to capture basic EEG measurements, and despite the bulk and visibility of these devices, the data that they capture are more powerful than those of the single-channel EEG systems currently being prototyped for hearables. Beyond signal quality, perhaps the main affordance of hearables relative to other wearables for providing bio signal input data pertains to the multitude of signals that could be collected from one particular body site; a site that happens to be relatively covert, provides a secure attachment, and leaves both the hands and eyes free for other interactions.
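As a concrete illustration of the kind of processing these sensors imply (using a synthetic waveform and our own constants, not any device's actual firmware or API), heart rate can be estimated from a PPG-like signal simply by counting pulse peaks:

```python
import math

FS = 100          # sampling rate in Hz (assumed)
TRUE_BPM = 72     # simulated heart rate
DURATION_S = 10

# Synthesize a clean, periodic PPG-like waveform; a real in-ear sensor
# would add motion artifacts that require filtering first.
signal = [math.sin(2 * math.pi * (TRUE_BPM / 60) * (n / FS))
          for n in range(FS * DURATION_S)]

def estimate_bpm(samples: list, fs: int) -> float:
    """Count positive local maxima and convert peaks per second to beats per minute."""
    peaks = sum(
        1 for i in range(1, len(samples) - 1)
        if samples[i] > 0 and samples[i - 1] < samples[i] >= samples[i + 1]
    )
    return peaks * 60.0 / (len(samples) / fs)

print(estimate_bpm(signal, FS))  # 72.0
```

The ear's relative stillness compared with the wrist is precisely what makes such simple peak counting more viable there [58].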

4.6. Battery Life

Battery capacity plays a major role in the design of wearable computing devices, and hearables are no exception. The battery life of ear-mounted wearables tends to be shorter than that of other types of wearables, and many models are sold with carrying cases that double as portable charging solutions. In continuous use, battery life for most hearables is typically no longer than 5 h [60], limiting their ability to be worn all day long. Similar battery life issues have been overcome in the design of smart wristwatches [61]. If hearables are to properly blend into the background of everyday life, advances in battery technology, wireless charging [62], wireless protocols, or self-charging devices [63] will be required.
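The roughly 5-h ceiling follows directly from the small cells that fit in an ear canal; a back-of-the-envelope sketch (all capacities and currents below are round-number assumptions, not measured specifications of any product) shows why charging cases are the current workaround:

```python
def runtime_hours(capacity_mah: float, draw_ma: float) -> float:
    """Continuous runtime of a battery at a constant average current draw."""
    return capacity_mah / draw_ma

EARBUD_MAH = 50.0    # small in-ear cell (assumed)
AVG_DRAW_MA = 10.0   # average draw while streaming over Bluetooth (assumed)
CASE_MAH = 300.0     # charging-case battery (assumed)

per_charge = runtime_hours(EARBUD_MAH, AVG_DRAW_MA)   # 5.0 h per charge
case_refills = CASE_MAH / EARBUD_MAH                  # 6 extra charges (ignoring losses)
total = per_charge * (1 + case_refills)               # 35.0 h away from a wall outlet
print(per_charge, total)
```

Under these assumptions the case, not the earbud, determines whether the device lasts a full day of intermittent use.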

4.7. Computational Capacity

Related to battery life, wearable devices must make a design trade-off between extended battery life and computational ability. To maximize both, product designers must make critical decisions about whether to perform computational processing locally on the device itself, remotely on a paired device, or within a cloud-computing environment. The vast majority of hearables rely on a paired device for data processing, including telecommunication, music/audio, and/or sensor data processing. In cases where information is processed locally (e.g., the Bragi Dash has an on-board mp3 player), battery life tends to improve. However, with the proliferation of on-device machine learning chips and machine learning algorithm proxies [61,64], it may soon be possible to improve both the battery life and the computational capabilities of ear-mounted wearable computing devices.
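This trade-off can be framed with a simple energy model (a common formulation in the computation-offloading literature; every constant below is an assumption for illustration): offloading pays only when shipping the data over the radio costs less energy than computing on it locally.

```python
def local_energy_mj(cycles: float, mj_per_megacycle: float) -> float:
    """Energy to process a task on the device's own CPU, in millijoules."""
    return cycles / 1e6 * mj_per_megacycle

def offload_energy_mj(payload_kb: float, mj_per_kb: float) -> float:
    """Energy to ship the task's data over the radio to a paired device."""
    return payload_kb * mj_per_kb

# Example task: 50 million CPU cycles locally vs. radioing a 20 kB sensor window.
e_local = local_energy_mj(50e6, mj_per_megacycle=0.8)   # 40.0 mJ on-device
e_offload = offload_energy_mj(20.0, mj_per_kb=1.5)      # 30.0 mJ over the radio
print("offload" if e_offload < e_local else "local")
```

On-device ML accelerators shift the balance by lowering the energy per megacycle, which is why local processing of sensor data may become increasingly attractive.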
The current wearables market can be subdivided in many ways on the basis of functionality, including health monitoring, performance tracking, life-streaming, intelligent personal assistants, and multifunctional devices. While each functional subcategory could be optimized for a given purpose, it seems plausible that users might like to own more than one type of device. While many affordances have been described above, hearables are likely not the ideal solution for all applications; yet they offer a number of affordances that are particularly well suited to ubiquitous and affect-aware computing. Further, as discussed below, the number of new products entering the hearables market suggests that the most functional and useful products are still to come.

5. The Hearables Market

To better understand the affordances of hearables, we quantified several aspects of currently available hearables, as well as of devices proposed to become available soon. We culled a list of devices from several existing resources [4,59,65] and then verified the affordances offered by each device against online manufacturer data. The section below highlights some of the general trends found in this analysis.
Despite the number of affordances offered by hearables, devices that are currently on the market typically utilize only a select few. Figure 2 shows an affordance breakdown for 20 different devices currently on the market. The majority of available hearables are headsets (i.e., combined headphones/microphones) with some type of tactile control input. Models with these features might be considered basic multi-purpose hearables and have a wide variety of uses, including smartphone/IPA interaction, basic music listening, sonic environment control, hearing aids, and more. These devices vary widely in cost, design, and marketing, despite the similarity of their affordances. The largest percentage of these devices is marketed as basic music/smartphone interfaces, and the next largest group as environmental sound control devices. Within our list of proposed next-generation devices (see Figure 3), there seems to be a shift towards sound control devices and away from general-purpose smartphone interfaces. Within this small sample, the main point is not to suggest that one type of hearable is better or more likely to find widespread adoption, but rather to provide some evidence that hearables in general, even with very similar affordances, will continue to evolve within distinct applications. The diverse evolution of these general-purpose hearables within multiple applications makes them unique relative to other wearables.
The next most common features within the available hearables market are embedded heart rate sensors (5 models) and accelerometer/gyroscope sensors (4 models). Devices taking advantage of these affordances are useful for both health and fitness monitoring. However, as mentioned above, user motion data have uses that extend beyond health applications, as these sensors also afford the ability to capture natural user gestures. The dual utility of accelerometer sensors within hearable devices therefore has strong potential, as users may choose to purchase an accelerometer device solely for gesture-based control or for health monitoring, while gaining the ability to take advantage of both applications. Figure 3 provides some evidence in support of this conjecture, as there are more proposed devices that plan to take advantage of accelerometer sensors than PPG heart rate sensors. Again, this finding is not intended to imply that fitness/health hearables are more likely than other hearables to find widespread adoption, but rather to bring attention to the affordances that hearables offer relative to other types of wearables, as well as the trend towards more hearables offering these affordances.
The remaining affordances that we examined, including light, temperature, compass, and electrophysiology sensors, were not as common within existing hearables. With the exception of devices with embedded light sensors used to detect whether they are in a user’s ear, devices in these categories might be classified as special-purpose devices. Such devices have been marketed for a variety of applications, including driver alertness [66], performance monitoring [66], and audio navigation [67,68]. As described above, head-mounted compass data afford interesting applications for Augmented Audio Reality; within the set of proposed devices, compass sensors exhibited the largest increase (5 newly proposed devices). Temperature sensors, which are useful in industrial and health-related performance monitoring applications, were also found in more proposed devices than current devices. Even though the utility of these extra affordances remains unclear, proposed devices continue to include additional sensors.
In comparing currently available devices to the next generation of hearables, it appears that the perfect feature set has yet to be found. While hearables can record a wide variety of sensory input, this does not necessarily imply that the ears are the best location to capture such information. As reviewed above, hearables have evolved over the last century to meet a variety of application-specific needs. New technology and new applications will likely continue to drive this evolution, likely towards full-fledged ear-mounted computers designed for discreet, continuous use.

6. Conclusions

One reason why Weiser’s predictions for ubiquitous computing have largely become a reality is that they were formulated with a focus on natural human interactions and perceptual proclivities. For these same reasons, hearables, which take advantage of our auditory system’s ability to multi-task and to facilitate speech-based interaction, offer great potential for the future of personal and ubiquitous computing. In particular, we speculate that ear-mounted wearable devices will play a large role in the evolution of affective computing, especially considering the multitude of biosensors that may be placed on or within the ear. When computing systems can enter the cycle of sensing and responding to the current physiological state of the user, a wide array of new applications becomes possible, such as smart A.I. assistants that adjust the quantity and type of information presented to the user according to their current state, fitness/workout coaches that both monitor and motivate users, systems that automatically connect users at mutually good moments, and systems that help users regulate stressful or difficult situations. As stated throughout this review, many useful types of information can be obtained from within and around the ear, and users are already familiar with ear-mounted devices owing to their long evolutionary history.
The affordances offered by obtaining additional user information from ear-mounted devices could lead to new types of context-aware or affect-aware computing, especially when paired with emerging techniques for finding patterns within large-scale datasets. Despite the factors that currently limit the utility of hearables, including signal reliability, battery life, and hearable-specific software development, the current market contains a sizable number of devices that cater to specific niches, from basic multi-purpose smartphone interfaces to high-tech fitness and health monitors. Further, trends within proposed devices offer modest evidence that the affordances of hearables will continue to be explored. Owing to the uniqueness of their affordances, we believe that it will not be long before nearly everyone has “a computer in their ear”.

Author Contributions

Conceptualization, J.P. and M.K.-O.; Methodology, J.P. and M.K.-O.; Investigation J.P.; Resources, J.P. and M.K.-O.; Writing-Original Draft Preparation, J.P.; Writing-Review & Editing, J.P. and M.K.-O.; Supervision, M.K.-O.; Project Administration, M.K.-O.; Funding Acquisition, M.K.-O.

Funding

This research was funded by Concordia University Start-up Funds.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Weiser, M. The computer for the 21st century. Sci. Am. 1991, 265, 94–104. [Google Scholar] [CrossRef]
  2. Guler, S.D.; Gannon, M.; Sicchio, K. Crafting Wearables: Blending Technology with Fashion; Apress: New York, NY, USA, 2016; p. 24. 216p. [Google Scholar]
  3. Hunn, N. Hearables—The New Wearables. Available online: http://www.nickhunn.com/hearables-the-new-wearables (accessed on 1 February 2014).
  4. Hunn, N. The Market for Hearable Devices 2016–2020. Available online: http://www.nickhunn.com/wp-content/uploads/downloads/2016/11/The-Market-for-Hearable-Devices-2016-2020.pdf (accessed on 20 January 2016).
  5. Gibson, J.J. The Ecological Approach to Visual Perception: Classic Edition; Psychology Press: Hove, UK, 2014. [Google Scholar]
  6. Gaver, W.W. Technology affordances. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 27 April–2 May 1991; pp. 79–84. [Google Scholar]
  7. Bregman, A.S. Auditory Scene Analysis: The Perceptual Organization of Sound; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
  8. Cherry, E.C. Some experiments on the recognition of speech, with one and with two ears. J. Acoust. Soc. Am. 1953, 25, 975–979. [Google Scholar] [CrossRef]
  9. Picard, R.W. Affective Computing; MIT Press: Cambridge, MA, USA, 1995. [Google Scholar]
  10. LeBoeuf, S.F.; Aumer, M.E.; Kraus, W.E.; Johnson, J.L.; Duscha, B. Earbud-based sensor for the assessment of energy expenditure, heart rate, and VO2max. Med. Sci. Sports Exerc. 2014, 46, 1046. [Google Scholar] [CrossRef] [PubMed]
  11. Goverdovsky, V.; von Rosenberg, W.; Nakamura, T.; Looney, D.; Sharp, D.J.; Papavassiliou, C.; Morrell, M.J.; Mandic, D.P. Hearables: Multimodal physiological in-ear sensing. Sci. Rep. 2017, 7, 6948. [Google Scholar] [CrossRef] [PubMed]
  12. Stankievech, C. From stethoscopes to headphones: An acoustic spatialization of subjectivity. Leonardo Music J. 2007, 17, 55–59. [Google Scholar] [CrossRef]
  13. Mills, M. Hearing aids and the history of electronics miniaturization. IEEE Ann. Hist. Comput. 2011, 33, 24–45. [Google Scholar] [CrossRef]
  14. Sterne, J. The Audible Past: Cultural Origins of Sound Reproduction; Duke University Press: Durham, NC, USA, 2003. [Google Scholar]
  15. Howeth, L.S. History of Communications Electronics in the United States Navy; For sale by the Superintendent of Documents; US Government Printing Office: Washington, DC, USA, 1963. [Google Scholar]
  16. Weber, H. Head cocoons: A sensori-social history of earphone use in West Germany, 1950–2010. Senses Soc. 2010, 5, 339–363. [Google Scholar] [CrossRef]
  17. Everrett, T.M. Ears Wide Shut: Headphones and Moral Design; Carleton University: Ottawa, ON, Canada, 2014. [Google Scholar]
  18. Hallock, J. A Brief History of VoIP, Evolution and Trends in Digital Media Technologies. 2004. Available online: http://www.joehallock.com/edu/pdfs/Hallock_J_VoIP_Past.pdf (accessed on 12 July 2018).
  19. Byrne, S. Survey Results: Devices We Use Daily in 2014. Available online: https://www.cnet.com/au/news/survey-results-devices-we-use-daily-in-2014/ (accessed on 15 January 2014).
  20. Bragi. The Dash. Available online: https://www.bragi.com/thedash/ (accessed on 3 February 2018).
  21. Yang, C.-C.; Hsu, Y.-L. A review of accelerometry-based wearable motion detectors for physical activity monitoring. Sensors 2010, 10, 7772–7788. [Google Scholar] [CrossRef] [PubMed]
  22. Coulson, M. Attributing emotion to static body postures: Recognition accuracy, confusions, and viewpoint dependence. J. Nonverbal Behav. 2004, 28, 117–139. [Google Scholar] [CrossRef]
  23. Bederson, B.B. Audio augmented reality: A prototype automated tour guide. In Proceedings of the Conference Companion on Human Factors in Computing Systems, Denver, CO, USA, 7–11 May 1995; pp. 210–211. [Google Scholar]
  24. Mynatt, E.D.; Back, M.; Want, R.; Frederick, R. Audio Aura: Light-weight audio augmented reality. In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology, Banff, AB, Canada, 14–17 October 1997; pp. 211–212. [Google Scholar]
  25. Heller, F.; Krämer, A.; Borchers, J. Simplifying orientation measurement for mobile audio augmented reality applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto, ON, Canada, 26 April–1 May 2014; pp. 615–624. [Google Scholar]
  26. Vallozzi, L.; Vandendriessche, W.; Rogier, H.; Hertleer, C.; Scarpello, M.L. Wearable textile GPS antenna for integration in protective garments. In Proceedings of the 2010 Fourth European Conference on Antennas and Propagation (EuCAP), Barcelona, Spain, 12–16 April 2010; pp. 1–4. [Google Scholar]
  27. Sano, A.; Tomita, T.; Oba, H. Applications using earphone with biosignal sensors. Hum. Interface Soc. Meet. 2010, 12, 1–6. [Google Scholar]
  28. Poh, M.-Z.; Kim, K.; Goessling, A.D.; Swenson, N.C.; Picard, R.W. Heartphones: Sensor earphones and mobile application for non-obtrusive health monitoring. In Proceedings of the 2009 International Symposium on Wearable Computers, ISWC’09, Linz, Austria, 4–7 September 2009; pp. 153–154. [Google Scholar]
  29. Tamura, T.; Maeda, Y.; Sekine, M.; Yoshida, M. Wearable photoplethysmographic sensors—Past and present. Electronics 2014, 3, 282–302. [Google Scholar] [CrossRef]
  30. Ernst, G. Heart Rate Variability; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  31. Lin, W.-H.; Wu, D.; Li, C.; Zhang, H.; Zhang, Y.-T. Comparison of heart rate variability from PPG with that from ECG. In Proceedings of the International Conference on Health Informatics, Angers, France, 3–6 March 2014; pp. 213–215. [Google Scholar]
  32. Hanning, C.; Alexander-Williams, J. Pulse oximetry: A practical review. BMJ 1995, 311, 367. [Google Scholar] [CrossRef] [PubMed]
  33. Goverdovsky, V.; Looney, D.; Kidmose, P.; Mandic, D.P. In-ear EEG from viscoelastic generic earpieces: Robust and unobtrusive 24/7 monitoring. IEEE Sens. J. 2016, 16, 271–277. [Google Scholar] [CrossRef]
  34. Nguyen, A.; Raghebi, Z.; Banaei-Kashani, F.; Halbower, A.C.; Vu, T. LIBS: A low-cost in-ear bioelectrical sensing solution for healthcare applications. In Proceedings of the Eighth Wireless of the Students, by the Students, and for the Students Workshop, New York, NY, USA, 3–7 October 2016; pp. 33–35. [Google Scholar]
  35. Debener, S.; Emkes, R.; de Vos, M.; Bleichner, M. Unobtrusive ambulatory EEG using a smartphone and flexible printed electrodes around the ear. Sci. Rep. 2015, 5, 16743. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Stuffler, M. EMG and EEG Input in Human Computer Interaction. In Secondary Tasks: Hauptseminar Medieninformatik WS; Technical Report LMU-MI-2014-1; Ludwig Maximilian University of Munich: München, Germany, 2014; pp. 96–101. [Google Scholar]
  37. Matthies, D.J. InEar BioFeedController: A headset for hands-free and eyes-free interaction with mobile devices. In Proceedings of the CHI’13 Extended Abstracts on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; pp. 1293–1298. [Google Scholar]
  38. Lu, Z.; Chen, X.; Li, Q.; Zhang, X.; Zhou, P. A Hand Gesture Recognition Framework and Wearable Gesture-Based Interaction Prototype for Mobile Devices. IEEE Trans. Hum. Mach. Syst. 2014, 44, 293–299. [Google Scholar] [CrossRef]
  39. Leeb, R.; Sagha, H.; Chavarriaga, R.; del R Millán, J. A hybrid brain–computer interface based on the fusion of electroencephalographic and electromyographic activities. J. Neural Eng. 2011, 8, 025011. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  40. Fatourechi, M.; Bashashati, A.; Ward, R.K.; Birch, G.E. EMG and EOG artifacts in brain computer interface systems: A survey. Clin. Neurophysiol. 2007, 118, 480–494. [Google Scholar] [CrossRef] [PubMed]
  41. Pinkus, H. Embryology and anatomy of skin. Skin 1971, 1–28. [Google Scholar]
  42. Houdas, Y.; Ring, E. Human Body Temperature: Its Measurement and Regulation; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  43. Boucsein, W. Electrodermal Activity; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  44. Liu, R.; Cornelius, C.; Rawassizadeh, R.; Peterson, R.; Kotz, D. Vocal Resonance: Using Internal Body Voice for Wearable Authentication. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2018, 2, 19. [Google Scholar] [CrossRef]
  45. Peng, G.; Zhou, G.; Nguyen, D.T.; Qi, X.; Yang, Q.; Wang, S. Continuous authentication with touch behavioral biometrics and voice on wearable glasses. IEEE Trans. Hum. Mach. Syst. 2017, 47, 404–416. [Google Scholar] [CrossRef]
  46. Nishimura, J.; Kuroda, T. Eating habits monitoring using wireless wearable in-ear microphone. In Proceedings of the 3rd International Symposium on Wireless Pervasive Computing, ISWPC 2008, Santorini, Greece, 7–9 May 2008; pp. 130–132. [Google Scholar]
  47. YONO Labs. The World’s First In-Ear Ovulation Predictor. Available online: https://www.yonolabs.com (accessed on 30 January 2018).
  48. Nummenmaa, L.; Glerean, E.; Hari, R.; Hietanen, J.K. Bodily maps of emotions. Proc. Natl. Acad. Sci. USA 2014, 111, 646–651. [Google Scholar] [CrossRef] [PubMed]
  49. Ioannou, S.; Gallese, V.; Merla, A. Thermal infrared imaging in psychophysiology: Potentialities and limits. Psychophysiology 2014, 51, 951–963. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Third Skin. Third Skin|Hy—Low Profile, Audio Transparent Earphones. Available online: http://www.thirdsk.in (accessed on 4 February 2018).
  51. StarkeyPro. Halo 2|StarkeyPro. Available online: https://starkeypro.com/products/wireless-hearing-aids/halo-2 (accessed on 4 February 2018).
  52. Due, B.L. The Future of Smart Glasses: An Essay about Challenges and Possibilities with Smart Glasses; Working papers on interaction and communication; University of Copenhagen: Copenhagen, Denmark, 2014; Volume 1, pp. 1–21. [Google Scholar]
  53. ReSound. LiNX2. Available online: http://www.resound.com/en-US/hearing-aids/linx2 (accessed on 8 February 2018).
  54. Starner, T. The challenges of wearable computing: Part 2. IEEE Micro 2001, 21, 54–67. [Google Scholar] [CrossRef]
  55. Bluetooth 5 Quadruples Range, Doubles Speed, Increases Data Broadcasting Capacity by 800%. Available online: https://www.bluetooth.com/news/pressreleases/2016/06/16/-bluetooth5-quadruples-rangedoubles-speedincreases-data-broadcasting-capacity-by-800 (accessed on 8 February 2016).
  56. Kim, J.S.; Kim, C.H. A review of assistive listening device and digital wireless technology for hearing instruments. Korean J. Audiol. 2014, 18, 105. [Google Scholar] [CrossRef] [PubMed]
  57. Galster, J.A. A new method for wireless connectivity in hearing aids. Hear. J. 2010, 63, 36–38. [Google Scholar] [CrossRef]
  58. Wang, R.; Blackburn, G.; Desai, M.; Phelan, D.; Gillinov, L.; Houghtaling, P.; Gillinov, M. Accuracy of wrist-worn heart rate monitors. JAMA Cardiol. 2017, 2, 104–106. [Google Scholar] [CrossRef] [PubMed]
  59. The Hearable Universe. (H)earable: World of Smart Headphones. Available online: http://hearable.world/the-hearable-universe-2017 (accessed on 23 January 2016).
  60. Nuheara. How IQbuds Battery Life Compares to Others. Available online: https://www.nuheara.com/wireless-earbuds-battery-life/ (accessed on 6 July 2018).
  61. Rawassizadeh, R.; Pierson, T.J.; Peterson, R.; Kotz, D. NoCloud: Exploring Network Disconnection through On-Device Data Analysis. IEEE Pervasive Comput. 2018, 17, 64–74. [Google Scholar] [CrossRef]
  62. Lu, X.; Wang, P.; Niyato, D.; Kim, D.I.; Han, Z. Wireless charging technologies: Fundamentals, standards, and network applications. IEEE Commun. Surv. Tutor. 2016, 18, 1413–1452. [Google Scholar] [CrossRef]
  63. Pu, X.; Li, L.; Liu, M.; Jiang, C.; Du, C.; Zhao, Z.; Hu, W.; Wang, Z.L. Wearable self-charging power textile based on flexible yarn supercapacitors and fabric nanogenerators. Adv. Mater. 2016, 28, 98–105. [Google Scholar] [CrossRef] [PubMed]
  64. Mathew, B.K.; Ng, J.C.; Zerbe, J.L.; Apple Inc. Using proxies to enable on-device machine learning. U.S. Patent Application No. 15/275355, 24 September 2016. [Google Scholar]
  65. Everyday Hearing. The Complete Guide to Hearable Technology in 2017. Available online: https://www.everydayhearing.com/hearing-technology/articles/hearables (accessed on 23 January 2017).
  66. Maven Machines. Co-Pilot. Available online: http://mavenmachines.com/co-pilot/ (accessed on 20 February 2018).
  67. Madden, P.; Leaman, R.; Corrigan, D. Cities Unlocked, Phase I Report. Available online: https://futurecities.catapult.org.uk/wp-content/uploads/2015/10/CUReport_WEB.pdf (accessed on 10 February 2018).
  68. Aftershokz. Available online: https://aftershokz.com/ (accessed on 20 February 2018).
Figure 1. Evolutionary development of hearables.
Figure 2. List of hearables devices, and their associated features, that are currently available.
Figure 3. List of proposed hearables devices, and their associated features.
Table 1. Summary of ear-mounted affordances.
Feature | Principal Affordance | Representative Application
Speakers | Isolated presentation of audio | Music Listening
Microphones | Two-way sound communication | Telecommunication
Wireless Transceivers | Untethered accessibility | Fitness; Driving
Tactile Controls | Event triggering & computer control | Remote User Interaction
Movement Sensors | User gesture & location monitoring | Audio Navigation
Biological Sensors | User affective and cognitive state monitoring | Affective Computing
Table 2. Summary of ear-mounted bio signal inputs.
Biosensor | Principal Affordance
Photoplethysmography | Stress Levels & Activity Monitoring
Electromyography | Local Movement & Muscular Effort
Electrooculography | Alertness & Attention
Electroencephalography | Cognitive Load & Alertness
Electrodermal Activity | Stress & Arousal Levels
Bio-Acoustic Microphones | Respiration & Food Intake Monitoring
Body Temperature | Vital Sign Monitoring & Stress Levels
