1. Background
Hearing loss is a prevalent health issue that affects individuals of all cultures, races, and age groups. Its negative impact on a person’s quality of life can be considerable, particularly if the condition goes undiagnosed [1]. For children, undetected hearing loss can impede development, and early identification and intervention are critical for them to acquire essential skills at a similar pace to their peers. It is therefore imperative to detect any form of hearing loss as early as possible [2]. Research has revealed that hearing loss can range from mild to profound and that a broad spectrum of contributing factors exists [3,4,5,6].
Hearing loss can be categorized as either sensorineural or conductive: sensorineural hearing loss affects the inner ear and auditory nervous system, whereas conductive hearing loss is caused by malformations or diseases of the outer ear, the ear canal, or the middle ear structures. Both types can interfere with cognitive development by disrupting auditory processing and the development of binaural hearing [7,8]. Binaural hearing is essential for sound localization, sound segregation, and understanding speech in noisy environments. Any dysfunction in one or both ears can disrupt binaural processing, resulting in difficulties in sound perception [9].
Binaural hearing is the ability to integrate information received simultaneously from both ears, enabling individuals to identify, locate, and separate sound sources. This ability is crucial for understanding speech in noisy environments, a phenomenon known as the “cocktail party effect” [10]. Any dysfunction in one or both ears, including unilateral hearing loss (UHL) and bilateral hearing loss (BHL), can disrupt the binaural processing mechanisms [11]. Previous research has shown that untreated binaural hearing impairment can significantly impede a child’s development, including verbal cognition skills [12]. Binaural hearing and other aspects of auditory functioning develop before adolescence. Adults can also suffer from binaural hearing impediments, which affect their quality of life and limit further cognitive development. With increasing age, neural synchrony in the central auditory system deteriorates, contributing to the difficulty older listeners experience in perceiving the temporal cues of sound [13,14].
Auditory processing begins in the brainstem and mesencephalon, where responses such as auditory reflexes are coordinated, followed by processing in the auditory cortex of the temporal lobe [15]. Binaural processing in the auditory cortex contributes to sound localization, differentiation, and the delineation of important auditory sources from noise. Binaural hearing and other aspects of auditory functioning mature throughout adolescence and require auditory stimuli for full development. Without this input, auditory functioning, and therefore cognitive development, is at risk [16].
Testing for hearing impediments is diverse and well documented. While psychometric methods are commonly used, measures such as the electroencephalogram (EEG) have proven able to detect hearing disorders in infants [17]. The EEG method eliminates reliance on participant literacy and communication skills and is relatively easy to perform [18], making it highly suitable for young children [19]. Although collecting an EEG can be more challenging than a traditional hearing assessment, appropriate stimulus parameters, recording standards, and processing techniques make the method suitable and feasible for binaural hearing assessment [20,21].
Auditory evoked potentials (AEPs) are the electrical potentials evoked by externally presented auditory stimuli in any part of the auditory system, from the cochlea to the cerebral cortex. They can be used to assess neurological integrity and auditory function [22] and are an essential, non-invasive tool for assessing auditory function and diagnosing hearing disorders [23,24]. AEPs are categorized based on the time interval between the onset of the auditory stimulus and the peak of the evoked response [25]. The major AEP components are the auditory brainstem response (ABR), the middle latency response (MLR), and the late latency response (LLR) [26].
The ABR is the earliest AEP component, occurring within the first 10 ms after the presentation of an auditory stimulus. It reflects the activity of the auditory nerve and brainstem and is commonly used to assess hearing sensitivity and diagnose hearing disorders [27]. The MLR is a later component, occurring between 10 and 80 ms after stimulus presentation [28,29]. It reflects the activity of the auditory cortex and is thought to be involved in the processing of complex auditory stimuli, including speech. A recent study [30] investigated the use of the MLR to evaluate auditory processing in children with autism spectrum disorders (ASD) and found that the MLR was reduced in children with ASD compared to typically developing children, suggesting that the MLR can be a useful tool for identifying auditory processing deficits in individuals with ASD. The LLR is the latest AEP component, occurring between 50 and 500 ms after stimulus presentation. It reflects the activity of higher-order auditory processing areas, including the temporal and frontal lobes, and is thought to be involved in the cognitive and attentional processing of auditory stimuli. In recent years, the use of LLRs to investigate various hearing disorders has been explored [31,32]. AEPs have also been used to investigate the effects of different types of noise on auditory processing and to evaluate the neural mechanisms underlying sensory integration. Overall, AEPs are a valuable tool for evaluating auditory function and diagnosing hearing disorders [33].
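The latency windows quoted above can be captured in a small helper function. This is purely illustrative; the boundaries are those stated in this section, and note that the MLR and LLR windows overlap between 50 and 80 ms, so a peak in that range matches both components:

```python
# Illustrative classifier for AEP components by post-stimulus latency.
# Window boundaries (in ms) are the ones quoted above; the MLR and LLR
# windows overlap between 50 and 80 ms.
AEP_WINDOWS_MS = {
    "ABR": (0.0, 10.0),    # auditory brainstem response
    "MLR": (10.0, 80.0),   # middle latency response
    "LLR": (50.0, 500.0),  # late latency response
}

def classify_latency(latency_ms):
    """Return the AEP component(s) whose window contains latency_ms."""
    return [name for name, (lo, hi) in AEP_WINDOWS_MS.items()
            if lo <= latency_ms < hi]
```

For example, `classify_latency(60)` returns both `"MLR"` and `"LLR"`, reflecting the overlap of the quoted windows.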
Each of these components reflects electrical activity from a different stage of the auditory pathway. AEPs have the potential to provide important information for the management and treatment of hearing disorders, as well as to advance our understanding of the neural mechanisms underlying auditory perception and cognition [34,35,36]. Further research is needed to fully understand the neural mechanisms underlying AEPs and their clinical and scientific implications.
In investigations of binaural hearing with AEPs, the MLR and LLR have emerged as the two components of main interest. Research on the MLR has primarily focused on its sensitivity to binaural processing: studies have shown that the MLR is modulated by differences in interaural time and intensity cues, which are important for sound localization and binaural fusion [29], and that it is associated with the perceptual grouping of sounds, such as the segregation of speech from background noise [37,38]. However, the literature on the MLR in binaural hearing is limited, and more research is required to understand its role. The LLR has also been studied in the context of binaural hearing [39]. Studies have shown that the LLR is modulated by binaural cues, including interaural time and intensity differences, and is sensitive to the spatial location of sounds [40], suggesting a potential role in auditory scene analysis through its sensitivity to changes in binaural cues over time [41]. The literature on the LLR in binaural hearing is likewise limited, demanding further research to fully understand its contribution to binaural processing.
The literature on the EEG and AEPs is diverse, but little of it addresses binaural hearing assessment or the testing stimuli used to elicit AEP responses, revealing a gap in knowledge regarding the use of the EEG for binaural assessment. In particular, research pertaining to the auditory MLR and LLR indicates a need for further investigation and contribution to the existing body of knowledge. This study aims to fill this gap by exploring the potential of AEPs for binaural hearing assessment through MLR and LLR analysis.
ERP signals from the brain can be analysed in different ways, including time domain analysis of the AEP waveforms and frequency domain analysis using methods such as the Fast Fourier Transform (FFT) or Welch’s periodogram (Pwelch) [42,43,44]. The Pwelch method is a spectral decomposition technique that estimates the power spectral density (PSD) of EEG data, providing valuable information about the spectral content and power across different frequency bands. By averaging over overlapping segments, it reduces the variance of the PSD estimate, making it well suited to ERP data with low signal-to-noise ratios [45], and it offers insights into the underlying neural mechanisms of cognitive and sensory processing [44]. AEPs can be analysed in the time domain through averaging techniques, allowing for the examination of peak amplitudes, latencies, and interpeak latency differences under various conditions [46,47]. Together, time domain AEP analysis and the frequency domain Pwelch method are essential tools for understanding brain function and neurological disorders.
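As a concrete sketch of the frequency domain pipeline described above, the snippet below applies Welch’s method (`scipy.signal.welch`) to a synthetic signal and integrates the resulting PSD over a frequency band. The sampling rate, segment length, and test signal are arbitrary illustrative choices, not the study’s recording parameters:

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                        # sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic trace: a 22 Hz oscillation (high beta) buried in noise.
x = np.sin(2 * np.pi * 22.0 * t) + 0.5 * rng.standard_normal(t.size)

# Welch's periodogram: PSD averaged over overlapping segments to cut variance.
f, psd = welch(x, fs=fs, nperseg=256)

def band_power(f, psd, lo, hi):
    """Approximate power in [lo, hi) Hz by summing PSD bins times bin width."""
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])

p_beta = band_power(f, psd, 20.0, 25.0)   # band containing the 22 Hz tone
p_ctrl = band_power(f, psd, 40.0, 45.0)   # noise-only control band
```

Because the simulated oscillation lies in the 20–25 Hz band, `p_beta` comes out much larger than `p_ctrl`; on real data, `x` would be an MLR or LLR epoch average.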
This study aims to investigate the use of audiometric EEGs as an objective method to assess binaural hearing. The limited research on the MLR and LLR with binaural hearing stimuli creates an opportunity for novel contributions to the existing knowledge. The binaural masking level difference (BMLD) test, which has been recommended for measuring binaural hearing loss, was taken as a starting point: the stimuli used in BMLD trials are employed in the current audiometric EEG study to investigate the impact of binaural phase shifts of the auditory stimuli in the presence of noise.
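The homophasic/antiphasic stimulus construction can be sketched as follows. This is a minimal illustration of the S0N0 versus SπN0 paradigm with made-up default parameters, not the exact stimulus specification used in the study:

```python
import numpy as np

def bmld_stimulus(freq_hz, fs=44100, dur=0.5, snr_db=0.0,
                  antiphasic=False, seed=0):
    """Stereo tone-in-noise stimulus: identical noise in both ears (N0),
    with the tone either in phase at both ears (S0, homophasic) or
    phase-inverted in one ear (Spi, antiphasic)."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.sin(2.0 * np.pi * freq_hz * t)
    noise = np.random.default_rng(seed).standard_normal(t.size)
    # Scale the noise so tone power / noise power matches snr_db.
    noise *= np.sqrt(tone.var() / (noise.var() * 10.0 ** (snr_db / 10.0)))
    left = noise + tone
    right = noise + (-tone if antiphasic else tone)
    return np.stack([left, right])  # shape: (2, n_samples)
```

In the homophasic case the two channels are identical; in the antiphasic case they differ only by the inverted tone, which is precisely the interaural cue the BMLD paradigm manipulates.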
4. Discussion
The findings of the current study suggest that for signals masked in noise, phase changes can have a significant effect on binaural processing in the human brain, as measured by auditory evoked potentials (AEPs). Our results showed statistically significant differences between the AEP signals generated by antiphasic and homophasic stimuli in both time and frequency domain features. These differences suggest that the brain can detect and process interaural phase differences, which may be important for the spatial localization of sound sources and other aspects of auditory processing. The results are consistent with previous studies [67] showing that binaural stimulation produces larger cortical responses than monaural stimulation and that the amplitude and latency of AEPs depend on the binaural difference. However, our study is unique in its focus on phase changes and its use of stimuli with controlled frequency and noise parameters. The findings may improve our understanding of binaural processing in the human brain and could lead to new objective hearing tests in the future. Our results indicate that the detection of phase differences may be an important factor in the “cocktail party” effect, whereby listeners are able to focus on a particular sound source in a noisy environment [68].
In the frequency domain study, we used the Pwelch method to calculate the power spectral density of the MLR and LLR signals in various frequency bands to investigate the significance of phase differences in binaural processing [69]. Our results showed that the 20–25 Hz and 25–30 Hz frequency bands of the MLR and LLR signals differed significantly between antiphasic and homophasic stimuli. These bands correspond to the high beta and low gamma range of the EEG [70]. The finding is consistent with previous research suggesting that sensory integration produces activity in the high beta and low gamma range, which may indicate conscious and accurate phase detection of auditory stimuli [70,71]. Further analysis revealed that the stimuli yielding statistically significant differences were 500 Hz for the MLR signals, and 250 Hz and 500 Hz for the LLR signals, mainly in the 20–25 Hz and 25–30 Hz bands. These findings suggest that optimal binaural processing occurs at 500 Hz, in line with previous literature predicting that lower frequencies result in larger binaural masking level differences (BMLDs) [72,73].
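The kind of condition-wise comparison reported above can be illustrated with a paired test on per-subject band powers. The numbers below are simulated, not the study’s data; they only show the shape of the analysis (subject-wise 20–25 Hz band power under each condition, compared with `scipy.stats.ttest_rel`):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
n_subjects = 12

# Simulated per-subject 20-25 Hz band powers (arbitrary units).
homophasic = rng.normal(1.0, 0.2, n_subjects)
# Simulated antiphasic condition with a per-subject enhancement added.
antiphasic = homophasic + rng.normal(0.3, 0.1, n_subjects)

# Paired (repeated-measures) t-test across subjects.
t_stat, p_val = ttest_rel(antiphasic, homophasic)
```

With the simulated enhancement the paired test comes out significant; on real data, the band powers would be computed from the Pwelch PSDs of the MLR and LLR epochs for each subject and condition.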
We also analysed which electrode locations yielded the most significant differences between the homophasic and antiphasic conditions. Our results show that the midline electrodes provided more significant differences for the MLR [74], while both the midline and left electrodes provided significant differences for the LLR signals [75]. This may indicate that midline electrodes are more suitable for investigating the processing of pure tone stimuli in binaural hearing [76]. The left hemisphere of the brain, which is known to be important for processing the temporal aspects of sound, may also be involved in this processing [77]. This is in agreement with the findings of Ross et al. [78], who confirm a dominant hemispheric contribution to the processing of auditory stimuli in noisy environments.
The present study also investigated phase-sensitive binaural hearing using time domain ERP peak analysis. The results revealed that the Na peak of the MLR for 500 Hz stimuli, the N1 peak of the LLR for 500 Hz stimuli, and the P300 peak of the LLR for 250 Hz stimuli showed statistically significant differences between the antiphasic and homophasic conditions in the subject-wise analysis, indicating the importance of phase differences in binaural hearing. It has been suggested that neural functioning at the thalamo-cortical level (bottom-up) and neurocognitive functions (top-down) are related to phase-sensitive stimuli masked in noise for binaural hearing [79,80]. Furthermore, our study highlights the importance of the N1 and P300 peaks of the LLR in the analysis of binaural hearing and their potential use as measures of the cortical processing of interaural phase differences (IPDs) [80,81]. The Na peak of the MLR, the N1 peak of the LLR, and the P300 peak of the LLR are important components in analysing the relevance of IPDs for binaural hearing: they are believed to reflect the processing of IPDs at different levels of the auditory system, from the midbrain regions through the auditory cortex to the attentional and working memory systems. The Na peak of the MLR is believed to represent the processing of IPDs in the midbrain regions, specifically the superior olivary complex and lateral lemniscus [82]. This peak is sensitive to small differences in IPD, making it an important measure for studying spatial hearing in binaural hearing tasks [83,84]. The N1 peak of the LLR reflects cortical processing of IPDs, particularly in the auditory cortex, and may represent the neural processing of the differences in the timing of the sound wave between the two ears at a higher level of the auditory system. Reduced N1 amplitudes may suggest a deficit in the cortical processing of IPDs, which may contribute to difficulties in discriminating tones and non-speech sounds from noise [85]. In the literature, the N1 peak has been identified as a physiological index of the ability to “tune in” one’s attention to a single sound source among several competing sources in a noisy environment, again referring to the “cocktail party effect” [86,87]. Enhancement of the N1 component in tasks requiring selective attention has also been described [46] and is in line with the current study’s findings. The P300 peak of the LLR may reflect cognitive processing of IPDs, particularly in attentional and working memory systems. Several studies have noted the importance of the P300 component in analysing binaural hearing in normal adults as well as in adults with central processing disorders [88,89]. Reduced P300 amplitudes may suggest a deficit in the cognitive processing of IPDs and impaired attentional and working memory functions.
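The peak measures used in this kind of time domain analysis can be extracted with a simple window search over the averaged waveform. The search windows and the synthetic ERP below are illustrative assumptions (e.g., N1 taken as the most negative sample in an 80–150 ms window, P300 as the most positive in 250–500 ms), not the study’s exact definitions:

```python
import numpy as np

def peak_in_window(erp, fs, t0_ms, t1_ms, polarity="neg"):
    """Latency (ms) and amplitude of the extreme sample in [t0_ms, t1_ms)."""
    i0 = int(t0_ms * fs / 1000.0)
    i1 = int(t1_ms * fs / 1000.0)
    seg = erp[i0:i1]
    idx = np.argmin(seg) if polarity == "neg" else np.argmax(seg)
    return (i0 + idx) * 1000.0 / fs, seg[idx]

fs = 1000                      # 1 sample per ms, illustrative
t = np.arange(600) / fs        # 0-600 ms epoch
# Synthetic averaged LLR: negative deflection near 100 ms (N1-like),
# positive deflection near 300 ms (P300-like).
erp = (-2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.010 ** 2))
       + 3.0 * np.exp(-((t - 0.30) ** 2) / (2 * 0.030 ** 2)))

n1_lat, n1_amp = peak_in_window(erp, fs, 80, 150, polarity="neg")
p3_lat, p3_amp = peak_in_window(erp, fs, 250, 500, polarity="pos")
```

On this synthetic trace, `n1_lat` comes out at 100 ms with a negative amplitude and `p3_lat` at 300 ms with a positive amplitude; on real data, `erp` would be the average over artifact-free epochs for one condition, and the per-condition peak amplitudes and latencies would feed the statistical comparison.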
The frequency domain analysis results suggest that the brain is capable of detecting and processing phase differences in binaural hearing, particularly in the high beta and low gamma frequency range, and that optimal binaural processing occurs for 500 Hz stimuli, based on the MLR results. Additionally, our results provide guidance on the selection of electrode locations for future binaural hearing studies. For time domain analysis, the Na peak of the MLR and the N1 peak of the LLR for 500 Hz stimuli can be used as markers in objective studies of binaural hearing, and the P300 peak of the LLR for 250 Hz stimuli may also contribute to objective measures of binaural hearing.
5. Conclusions
In conclusion, the current study explored the role of interaural phase differences in binaural processing in noise and their neural correlates in the human brain. The results demonstrated significant differences between auditory evoked potentials (AEP) generated by antiphasic and homophasic stimuli in both time and frequency domains. These findings highlight the brain’s ability to detect and process interaural phase differences, crucial for sound source localization and other auditory processing aspects.
Frequency domain analysis revealed significant differences in the middle latency response (MLR) signals for 500 Hz stimuli, while both 250 Hz and 500 Hz stimuli showed significant differences in the late latency response (LLR) signals, particularly in the 20–25 Hz and 25–30 Hz frequency bands. This suggests optimal binaural processing at 500 Hz, specifically in the high beta–low gamma frequency range, known for sensory integration. Additionally, midline electrodes proved more effective for investigating binaural processing of pure tone stimuli, yielding significant differences in MLR signals, while both the midline and left electrodes showed significant differences in LLR signals.
Furthermore, time domain analysis identified the Na peak of the MLR and the N1 peak of the LLR for 500 Hz stimuli as significant markers distinguishing responses to homophasic and antiphasic stimuli, with potential applications in objective studies of binaural hearing. The P300 peak of the LLR for 250 Hz stimuli also showed a strongly significant difference between responses to homophasic and antiphasic stimuli, suggesting it could serve as an objective measure of binaural hearing.
Future research can expand on these findings to explore the clinical implications of binaural processing in hearing disorders and related conditions.