Perceptual Doping: A Hypothesis on How Early Audiovisual Speech Stimulation Enhances Subsequent Auditory Speech Processing
Abstract
1. Introduction
2. Cognitive Mechanism behind the Perceptual Doping Phenomenon
3. Perceptual Doping and Aural Rehabilitation of People with Hearing Difficulties
3.1. Perceptual Doping and Controlling Experimental Setup for Collecting Auditory and Audiovisual Speech Data
3.2. Suggestions for Future Studies
- (1) The extent to which a short period of exposure to audiovisual speech stimuli facilitates visual-only and/or audiovisual speech identification is an interesting topic for future studies. Little is known about how audiovisual speech stimulation enhances the processing of visual-only speech cues for correct identification, particularly in people with hearing loss. In face-to-face communication, people with hearing difficulties rely more heavily on visual speech cues, because access to the auditory component of the audiovisual speech signal is limited by background noise or by the hearing loss itself.
- (2) The extent to which audiovisual speech training can improve cognitive function is another interesting topic for future studies. Ferguson and colleagues [25,26] were the first to show that auditory training can improve cognitive function in people with hearing loss. Hence, the question arises: can audiovisual training yield greater gains in cognitive functioning in people with hearing loss than auditory-only (or auditory-cognitive) training?
References
1. Sumby, W.H.; Pollack, I. Visual contribution to speech intelligibility in noise. J. Acoust. Soc. Am. 1954, 26, 212–215.
2. Erber, N.P. Interaction of audition and vision in the recognition of oral speech stimuli. J. Speech Hear. Res. 1969, 12, 423–425.
3. MacLeod, A.; Summerfield, Q. Quantifying the contribution of vision to speech perception in noise. Br. J. Audiol. 1987, 21, 131–141.
4. Moradi, S.; Lidestam, B.; Rönnberg, J. Gated audiovisual speech identification in silence vs. noise: Effects on time and accuracy. Front. Psychol. 2013, 4, 359.
5. Moradi, S.; Wahlin, A.; Hällgren, M.; Rönnberg, J.; Lidestam, B. The efficacy of short-term gated audiovisual speech training for improving auditory sentence identification in noise in elderly hearing aid users. Front. Psychol. 2017, 8, 368.
6. Bertelson, P.; Vroomen, J.; De Gelder, B. Visual recalibration of auditory speech identification: A McGurk aftereffect. Psychol. Sci. 2003, 14, 592–597.
7. Van Linden, S.; Vroomen, J. Recalibration of phonetic categories by lipread speech versus lexical information. J. Exp. Psychol. Hum. Percept. Perform. 2007, 33, 1483.
8. Hällgren, M.; Larsby, B.; Arlinger, S. A Swedish version of the Hearing In Noise Test (HINT) for measurement of speech recognition. Int. J. Audiol. 2006, 45, 227–237.
9. Lidestam, B.; Moradi, S.; Pettersson, R.; Ricklefs, T. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification. J. Acoust. Soc. Am. 2014, 136, EL142–EL147.
10. Rönnberg, J.; Lunner, T.; Ng, E.H.; Lidestam, B.; Zekveld, A.A.; Sörqvist, P.; Lyxell, B.; Träff, U.; Yumba, W.; Classon, E.; et al. Hearing impairment, cognition and speech understanding: Exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study. Int. J. Audiol. 2016, 55, 623–642.
11. Moradi, S.; Lidestam, B.; Ng, E.H.N.; Danielsson, H.; Rönnberg, J. Perceptual doping: An audiovisual facilitation effect on auditory speech processing, from phonetic feature extraction to sentence identification in noise. Ear Hear. 2019, 40, 312.
12. Norris, D.; McQueen, J.M.; Cutler, A. Perceptual learning in speech. Cogn. Psychol. 2003, 47, 204–238.
13. Signoret, C.; Johnsrude, I.; Classon, E.; Rudner, M. Combined effects of form- and meaning-based predictability on perceived clarity of speech. J. Exp. Psychol. Hum. Percept. Perform. 2018, 44, 277–285.
14. Rönnberg, J.; Holmer, E.; Rudner, M. Cognitive hearing science: Three memory systems, two approaches, and the Ease of Language Understanding model. J. Speech Lang. Hear. Res. 2021, 64, 359–370.
15. Rönnberg, J.; Signoret, C.; Andin, J.; Holmer, E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front. Psychol. 2022, 13, 967260.
16. Rönnberg, J.; Lunner, T.; Zekveld, A.; Sörqvist, P.; Danielsson, H.; Lyxell, B.; Dahlström, O.; Signoret, C.; Stenfelt, S.; Pichora-Fuller, M.K.; et al. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front. Syst. Neurosci. 2013, 7, 31.
17. van Wassenhove, V.; Grant, K.W.; Poeppel, D. Visual speech speeds up the neural processing of auditory speech. Proc. Natl. Acad. Sci. USA 2005, 102, 1181–1186.
18. Golumbic, E.Z.; Cogan, G.B.; Schroeder, C.E.; Poeppel, D. Visual input enhances selective speech envelope tracking in auditory cortex at a “cocktail party”. J. Neurosci. 2013, 33, 1417–1426.
19. Mégevand, P.; Mercier, M.R.; Groppe, D.M.; Golumbic, E.Z.; Mesgarani, N.; Beauchamp, M.S.; Mehta, A.D. Crossmodal phase reset and evoked responses provide complementary mechanisms for the influence of visual speech in auditory cortex. J. Neurosci. 2020, 40, 8530–8542.
20. Frei, V.; Schmitt, R.; Meyer, M.; Giroud, N. Visual speech cues enhance neural speech tracking in right auditory cluster leading to improvement in speech in noise comprehension in older adults with hearing impairment. Authorea 2023.
21. Stropahl, M.; Besser, J.; Launer, S. Auditory training supports auditory rehabilitation: A state-of-the-art review. Ear Hear. 2020, 41, 697–704.
22. Rao, A.; Rishiq, D.; Yu, L.; Zhang, Y.; Abrams, H. Neural correlates of selective attention with hearing aid use followed by ReadMyQuips auditory training program. Ear Hear. 2017, 38, 28–41.
23. Tye-Murray, N.; Spehar, B.; Sommers, M.; Mauzé, E.; Barcroft, J.; Grantham, H. Teaching children with hearing loss to recognize speech: Gains made with computer-based auditory and/or speechreading training. Ear Hear. 2022, 43, 181–191.
24. Sato, T.; Yabushita, T.; Sakamoto, S.; Katori, Y.; Kawase, T. In-home auditory training using audiovisual stimuli on a tablet computer: Feasibility and preliminary results. Auris Nasus Larynx 2020, 47, 348–352.
25. Ferguson, M.A.; Henshaw, H.; Clark, D.P.; Moore, D.R. Benefits of phoneme discrimination training in a randomized controlled trial of 50- to 74-year-olds with mild hearing loss. Ear Hear. 2014, 35, e110–e121.
26. Ferguson, M.A.; Henshaw, H. Auditory training can improve working memory, attention, and communication in adverse conditions for adults with hearing loss. Front. Psychol. 2015, 6, 556.