Search Results (6)

Search Parameters:
Keywords = in-ear audio

14 pages, 1549 KB  
Article
Equalizing the In-Ear Acoustic Response of Piezoelectric MEMS Loudspeakers Through Inverse Transducer Modeling
by Oliviero Massi, Riccardo Giampiccolo and Alberto Bernardini
Micromachines 2025, 16(6), 655; https://doi.org/10.3390/mi16060655 - 29 May 2025
Viewed by 2693
Abstract
Micro-Electro-Mechanical Systems (MEMS) loudspeakers are attracting growing interest as alternatives to conventional miniature transducers for in-ear audio applications. However, their practical deployment is often hindered by pronounced resonances in their frequency response, caused by the mechanical and acoustic characteristics of the device structure. To mitigate these limitations, we present a model-based digital signal equalization approach that leverages a circuit equivalent model of the considered MEMS loudspeaker. The method relies on constructing an inverse circuital model based on the nullor, which is implemented in the discrete-time domain using Wave Digital Filters (WDFs). This inverse system is employed to pre-process the input voltage signal, effectively compensating for the transducer frequency response. The experimental results demonstrate that the proposed method significantly flattens the Sound Pressure Level (SPL) over the 100 Hz-10 kHz frequency range, with a maximum deviation from the target flat frequency response of below 5 dB.
(This article belongs to the Special Issue Exploration and Application of Piezoelectric Smart Structures)
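
The equalization idea can be illustrated with a much simpler stand-in for the paper's nullor-based WDF inversion: given a (here synthetic) transducer frequency response, build a regularized inverse FIR filter and pre-filter the drive signal. The placeholder response, signal, tap count, and regularization constant below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of response equalization by inverse filtering (not the
# paper's nullor/WDF method): invert a measured transducer response H(f)
# with Tikhonov regularization and pre-filter the input signal.
import numpy as np
from scipy.signal import fftconvolve

def inverse_eq_fir(H, n_taps=1024, beta=1e-3):
    """Regularized inverse FIR from a frequency response H sampled on the
    rfft grid of an n_taps-point FFT (length n_taps//2 + 1)."""
    # Regularization keeps the inverse bounded at deep notches of |H|.
    H_inv = np.conj(H) / (np.abs(H) ** 2 + beta)
    h_inv = np.fft.irfft(H_inv, n=n_taps)
    # Center the impulse response and taper it with a window.
    return np.roll(h_inv, n_taps // 2) * np.hanning(n_taps)

# Usage sketch: pre-process the drive signal before the loudspeaker.
fs = 48_000
freqs = np.fft.rfftfreq(1024, 1 / fs)
H_meas = 1 / (1 + 1j * freqs / 4000)   # placeholder response, not a real measurement
x = np.random.randn(fs)                # 1 s of test signal
x_eq = fftconvolve(x, inverse_eq_fir(H_meas), mode="same")
```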

46 pages, 2469 KB  
Review
A Review on Head-Related Transfer Function Generation for Spatial Audio
by Valeria Bruschi, Loris Grossi, Nefeli A. Dourou, Andrea Quattrini, Alberto Vancheri, Tiziano Leidi and Stefania Cecchi
Appl. Sci. 2024, 14(23), 11242; https://doi.org/10.3390/app142311242 - 2 Dec 2024
Viewed by 7037
Abstract
A head-related transfer function (HRTF) is a mathematical model that describes the acoustic path between a sound source and a listener’s ear. HRTFs play a crucial role in creating immersive audio experiences through headphones or loudspeakers using binaural synthesis techniques. HRTF measurements can be conducted either with standardised mannequins or with in-ear microphones on real subjects. However, various challenges arise from, for example, individual differences in head shape, pinnae geometry, and torso dimensions, as well as from the extensive number of measurements required for optimal audio immersion. To address these issues, numerous methods have been developed to generate new HRTFs from existing data or through computer simulations. This review paper provides an overview of the current approaches and technologies for generating, adapting, and optimising HRTFs, with a focus on physical modelling, anthropometric techniques, machine learning methods, interpolation strategies, and their practical applications.
(This article belongs to the Special Issue Spatial Audio and Sound Design)
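
As a rough illustration of the binaural synthesis that HRTFs enable, the sketch below filters a mono source with a left and a right head-related impulse response (HRIR). The HRIRs here are dummy delays standing in for a measured or generated set (e.g. one loaded from a SOFA file); they are assumptions, not data from the review.

```python
# Minimal binaural-synthesis sketch: convolve a mono signal with the HRIRs
# of the desired direction to obtain the left/right ear signals.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Return an (N, 2) stereo array for headphone playback."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    out = np.zeros((max(len(left), len(right)), 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

# Usage sketch with placeholder HRIRs (a pure delay per ear).
fs = 48_000
mono = np.random.randn(fs)                  # 1 s of source signal
hrir_l = np.zeros(256); hrir_l[10] = 1.0    # placeholder: short delay
hrir_r = np.zeros(256); hrir_r[25] = 0.8    # placeholder: longer, attenuated
stereo = render_binaural(mono, hrir_l, hrir_r)
```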

15 pages, 1495 KB  
Article
Classification of Breathing Phase and Path with In-Ear Microphones
by Malahat H. K. Mehrban, Jérémie Voix and Rachel E. Bouserhal
Sensors 2024, 24(20), 6679; https://doi.org/10.3390/s24206679 - 17 Oct 2024
Cited by 1 | Viewed by 2101
Abstract
In recent years, the use of smart in-ear devices (hearables) for health monitoring has gained popularity. Previous research on in-ear breath monitoring with hearables uses signal processing techniques based on peak detection, which are greatly affected by movement artifacts and other challenging real-world conditions. In this study, we use an existing database of various breathing types captured with an in-ear microphone to classify breathing path and phase. Given the small dataset, we use XGBoost, a simple and fast classifier, to address three classification challenges. We achieve an accuracy of 86.8% for a binary path classifier, 74.1% for a binary phase classifier, and 67.2% for a four-class path-and-phase classifier. Our path classifier outperforms existing algorithms in recall and F1 score, highlighting the reliability of our approach. This work demonstrates the feasibility of using hearables for continuous breath monitoring with machine learning.
(This article belongs to the Special Issue Sensors for Breathing Monitoring)
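
A minimal sketch of the kind of classifier the abstract names, using the public XGBoost API on placeholder feature vectors and labels; the paper's actual audio features, dataset splits, and hyperparameters are not reproduced here.

```python
# Train a binary XGBoost classifier on feature vectors extracted from
# in-ear audio frames (features and labels below are synthetic stand-ins).
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))       # placeholder per-frame feature vectors
y = rng.integers(0, 2, size=1000)     # binary label, e.g. nasal vs. oral path

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("recall:", recall_score(y_te, pred), "F1:", f1_score(y_te, pred))
```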

19 pages, 26982 KB  
Article
Detection of the Compromising Audio Signal by Analyzing Its AM Demodulated Spectrum
by Alexandru Madalin Vizitiu, Lidia Dobrescu, Bogdan Catalin Trip, Vlad Florian Butnariu, Cristian Molder and Simona Viorica Halunga
Symmetry 2024, 16(2), 209; https://doi.org/10.3390/sym16020209 - 9 Feb 2024
Cited by 5 | Viewed by 1902
Abstract
The information technology and communication (IT&C) market consists of computing and telecommunication technology systems, which also include a variety of audio devices. Preserving the confidentiality of information transmitted through these devices stands as a critical concern across various domains and businesses. Notably, spurious electromagnetic emanations emitted by audio devices can be captured and processed, potentially leading to eavesdropping incidents. The evaluation of electronic devices for potential security vulnerabilities often involves employing Transient Electromagnetic Pulse Emanation Standard (TEMPEST) technology. This paper introduces a novel approach to TEMPEST testing specifically tailored for audio devices. The outcomes of the proposed approach offer valuable insights into TEMPEST equipment testing, aiming to mitigate the potential risks posed by threats exploitable by eavesdroppers in everyday scenarios. The present work examines two ubiquitous electronic devices: a notebook and a pair of in-ear headphones. The symmetrical framework of this study arises from the intrinsic similarity that, despite belonging to distinct categories, both devices can emit electromagnetic emissions that contain compromising audio signals, an assertion substantiated by the measurement results presented herein. The proposed methodology involves the analysis of the audio amplitude modulation (AM) demodulated signal in the frequency domain. This approach not only mitigates operator fatigue but also significantly reduces the testing time and instrument running hours required for these devices, and it opens the way to new applications.
(This article belongs to the Section Computer)
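
The general technique of inspecting an AM-demodulated spectrum can be sketched as envelope detection followed by an FFT. The carrier frequency, modulation depth, and sample rate below are illustrative assumptions operating on a synthetic signal, not a captured emanation or the authors' test procedure.

```python
# Envelope (AM) demodulation of a captured band, then spectrum analysis of
# the demodulated signal to look for audio content.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 200_000                               # assumed capture sample rate
t = np.arange(fs) / fs                     # 1 s of signal
audio = np.sin(2 * np.pi * 1_000 * t)      # hidden 1 kHz tone (placeholder)
carrier = np.cos(2 * np.pi * 40_000 * t)   # 40 kHz carrier, for illustration
captured = (1 + 0.5 * audio) * carrier

# Envelope via the analytic signal, then low-pass to the audio band.
envelope = np.abs(hilbert(captured))
b, a = butter(4, 20_000 / (fs / 2))
demod = filtfilt(b, a, envelope - envelope.mean())

# Inspect the demodulated spectrum for the recovered audio component.
spectrum = np.abs(np.fft.rfft(demod))
freqs = np.fft.rfftfreq(len(demod), 1 / fs)
print("peak at %.0f Hz" % freqs[spectrum.argmax()])
```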

13 pages, 2102 KB  
Article
Headphone Audio in Training Systems or Systems That Convey Important Sound Information
by Rafal Mlynski
Int. J. Environ. Res. Public Health 2022, 19(5), 2579; https://doi.org/10.3390/ijerph19052579 - 23 Feb 2022
Cited by 2 | Viewed by 2801
Abstract
In the work environment, miniature electroacoustic transducers are often used for communication, for the transmission of warning signals, or during training. They can be used in headphones or mounted in personal protective equipment, and it is often important that they reproduce sounds accurately. The purpose of this work was to assess audio paths by comparing the frequency response of the signal at the electrical outputs of six common-purpose devices, and the level of noise exposure was assessed in terms of the risk of hearing damage. The following headphones were investigated: low-budget closed-back, open-back for instant messengers, open-back for music, and in-ear. A head and torso simulator with a transfer function was used. The most uniform frequency response at the electrical outputs was found in smartphones. Sound cards integrated into laptop motherboards had highly unequal characteristics (up to 23 dB), and in the case of one of the laptops, the upper range of transmitted frequencies was limited to the 12,500 Hz band. An external sound card or wireless headphones can improve the situation. In the worst-case scenario, i.e., rock music, the listening time was limited to 2 h and 18 min.
(This article belongs to the Collection Occupational Safety and Personal Protective Equipment)
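
The limited listening time quoted above follows from standard noise-dose arithmetic. Below is a minimal sketch assuming the common 85 dB(A)/8 h criterion with a 3 dB exchange rate; the paper's exact criterion and measured exposure levels are not reproduced, so the illustrative levels will not yield its 2 h 18 min figure exactly.

```python
# Permissible daily listening time: the allowed time halves for every
# 3 dB of A-weighted equivalent level above the 85 dB(A) / 8 h criterion.
def permissible_hours(laeq_db, criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
    """Allowed daily exposure time for a given A-weighted equivalent level."""
    return criterion_hours / 2 ** ((laeq_db - criterion_db) / exchange_db)

for level in (85, 88, 91, 94):   # illustrative levels, not measured values
    h = permissible_hours(level)
    print(f"{level} dB(A): {int(h)} h {round((h - int(h)) * 60)} min")
```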

10 pages, 1194 KB  
Article
Mobile In-Ear Power Sensor for Jaw Joint Activity
by Jacob Bouchard-Roy, Aidin Delnavaz and Jérémie Voix
Micromachines 2020, 11(12), 1047; https://doi.org/10.3390/mi11121047 - 27 Nov 2020
Cited by 1 | Viewed by 2841
Abstract
In only a short time, in-ear wearables have expanded from hearing aids to a host of electronic devices such as wireless earbuds and digital earplugs. To operate, these devices rely exclusively on batteries, which are not only cumbersome but also suffer from several other drawbacks. In this paper, the earcanal dynamic movements generated by jaw activity are evaluated as an alternative source of energy that could replace batteries. A mobile in-ear power sensor capable of measuring jaw activity metrics is prototyped and tested on three subjects. The test results are then analyzed using a detection algorithm that detects jaw activity from the captured audio signals and classifies it into four main categories, namely chewing, swallowing, coughing, and talking. The mean power associated with each category of activity is then calculated from the pressure signals measured by a water-inflated earplug subjected to earcanal dynamic movement. The results show that, on average, 3.8 mW of power, produced mainly by the chewing movement, is readily available from within the earcanal.
(This article belongs to the Special Issue Micro/Nano-Scale Energy Harvester)
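
A rough sketch of the mean-power estimate described in the abstract, treating instantaneous hydraulic power as pressure times volumetric flow rate. The sample rate, pressure trace, and volume trace below are synthetic placeholders, so the resulting figure is only illustrative and will not match the paper's 3.8 mW.

```python
# Mean available power from earcanal deformation: take the time average of
# |pressure * volumetric flow| for a fluid-filled earplug (synthetic data).
import numpy as np

fs = 1_000                                        # assumed sample rate, Hz
t = np.arange(10 * fs) / fs                       # 10 s of chewing-like motion
pressure = 2_000 * np.sin(2 * np.pi * 1.5 * t)    # Pa, ~1.5 chews per second
volume = 20e-9 * np.sin(2 * np.pi * 1.5 * t)      # m^3 change in earplug volume

flow = np.gradient(volume, 1 / fs)                # volumetric flow rate, m^3/s
mean_power_w = np.mean(np.abs(pressure * flow))
print(f"mean available power: {mean_power_w * 1e3:.2f} mW")
```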
