Search Results (39)

Search Parameters:
Keywords = nonverbal connection

25 pages, 1772 KB  
Article
The Impact of Emotion Perception and Gaze Sharing on Collaborative Experience and Performance in Multiplayer Games
by Lu Yin, He Zhang and Renke He
J. Eye Mov. Res. 2026, 19(2), 34; https://doi.org/10.3390/jemr19020034 - 25 Mar 2026
Viewed by 306
Abstract
Compared to traditional offline collaboration, current online collaboration often lacks nonverbal social cues, resulting in lower efficiency and a reduced emotional connection between teammates. To address this issue, this study used a two-player collaborative puzzle game as the experimental setting to explore the impact of two nonverbal social cues, emotion and gaze, on collaborative experience and performance. Specifically, the study designed four collaborative modes: with and without teammates’ facial expressions, and with and without teammates’ gaze points. Sixty-two participants took part in the experiment, and each pair was required to complete all four modes. We then analyzed their collaborative experience through subjective questionnaires, objective facial expressions, and gaze overlap rates. The experimental results revealed that teammates’ gaze effectively enhanced collaborative efficiency, while facial expressions were key to optimizing subjective experience. Combining both cues yielded further advantages in the cognitive and emotional dimensions, leading to improved performance outcomes. The study also indicated that facial expressions could alleviate the social pressure triggered by a teammate’s shared gaze. Additionally, the study examined how personality differences influenced collaborative experience and performance: individuals with high agreeableness actively sought social cues, leading to more positive collaborative experiences. This study provides empirical evidence for understanding the interactive mechanisms of cognitive and emotional processes during online collaboration and points the way toward designing adaptive, personalized intelligent collaborative systems. Full article
16 pages, 53570 KB  
Article
A Multimodal In-Ear Audio and Physiological Dataset for Swallowing and Non-Verbal Event Classification
by Elyes Ben Cheikh, Yassine Mrabet, Catherine Laporte and Rachel E. Bouserhal
Sensors 2026, 26(7), 2019; https://doi.org/10.3390/s26072019 - 24 Mar 2026
Viewed by 365
Abstract
Swallowing is a critical marker of neurological and emotional health. The ability to monitor it continuously and non-invasively, especially through smart ear-worn devices, holds significant promise for clinical applications. Despite this potential, no public audio datasets currently support reliable swallowing sound detection. Existing datasets focus primarily on speech and breathing, offering limited coverage and lacking detailed annotations for swallowing events. To address this gap, we introduce an in-ear audio dataset specifically designed to capture a wide range of verbal and non-verbal sounds, with comprehensive labeling focused on swallowing. The dataset was collected from 34 healthy adults (14 females and 20 males) between the ages of 20 and 29. Each participant performed a series of predefined tasks involving both non-verbal and verbal events. Non-verbal tasks included swallowing, clicking, forceful blinking, touching the scalp, and physical movements such as squatting or walking in place. Verbal tasks consisted of speaking (e.g., describing an image). Recordings were conducted in both quiet and noisy environments to better reflect real-world conditions. Data were captured using a combination of in-/outer-ear microphones, a chest belt to record electrocardiogram (ECG), respiration, and acceleration signals, and an ultrasound probe to track tongue movement, which served as a reference for swallowing annotation. All signals were precisely synchronized. To ensure high data quality, the recordings were reviewed using both algorithmic analysis and manual inspection. Swallowing events were identified based on ultrasound signals and validated by an expert to guarantee accurate labeling. As a proof of concept that in-ear audio supports swallow classification, we fine-tuned a fully connected neural network on YAMNet embeddings plus zero-crossing rate (ZCR) features. Across the completed folds, the model reached an F1 score of 0.875 ± 0.013. Full article
(This article belongs to the Special Issue Sensors for Physiological Monitoring and Digital Health: 2nd Edition)
13 pages, 230 KB  
Article
Non-Verbal Communication in Nursing Home Settings
by Zunera Khan, Miguel Vasconcelos Da Silva, Daniel Kramarczyk, Lise Birgitte Holteng Austbø, Martha Therese Gjestsen, Ingelin Testad and Clive Ballard
Healthcare 2026, 14(5), 614; https://doi.org/10.3390/healthcare14050614 - 28 Feb 2026
Viewed by 596
Abstract
Background: People living with dementia in nursing homes commonly experience progressive impairments in cognition, communication, and functional ability, contributing to neuropsychiatric symptoms and reduced quality of life. As verbal communication declines, non-verbal communication (NVC), including facial expressions, gestures, eye contact, posture, and touch, becomes increasingly important for maintaining meaningful interactions. Objectives: This study aims to explore current NVC practices between nursing home (NH) staff and residents living with dementia. Methods: A mixed-methods, cross-sectional design was employed. NH staff completed an anonymous online questionnaire consisting of 13 items assessing NVC use and demographic characteristics. Quantitative items were rated using Likert scales, and qualitative responses were analysed using Giorgi’s phenomenological approach. Results: Quantitative findings showed that residents most frequently relied on facial expressions, reported as used very often in 24 of 33 NHs, followed by eye contact in 17 NHs and touch in 16 NHs. NH staff also reported extensive use of NVC during care interactions, particularly facial expressions (very often in 79% of NHs), eye contact (82%), and hand gestures (76%). Qualitative findings underscored the central role of NVC in interpreting residents’ needs, fostering emotional connection, and managing behavioural and psychological symptoms of dementia through subtle cues, visual prompts, and individualised strategies. Conclusions: Overall, the findings demonstrate that NVC is a fundamental component of communication and care delivery in dementia settings and highlight the need for structured training interventions to support staff in recognising and responding effectively to non-verbal signals. Full article
36 pages, 7640 KB  
Article
Predicting and Synchronising Co-Speech Gestures for Enhancing Human–Robot Interactions Using Deep Learning Models
by Enrique Fernández-Rodicio, Christian Dondrup, Javier Sevilla-Salcedo, Álvaro Castro-González and Miguel A. Salichs
Biomimetics 2025, 10(12), 835; https://doi.org/10.3390/biomimetics10120835 - 13 Dec 2025
Cited by 2 | Viewed by 710
Abstract
In recent years, robots have started to be used in tasks involving human interaction. For this to be possible, humans must perceive robots as suitable interaction partners, which can be achieved by giving the robots an animate appearance. One method of endowing a robot with a lively appearance is giving it the ability to perform expressions on its own, that is, combining multimodal actions to convey information. However, this becomes a challenge when the robot has to use gestures and speech simultaneously, as the non-verbal actions need to support the message communicated by the verbal component. In this manuscript, we present a system that, based on a robot’s utterances, predicts the corresponding gesture and synchronises it with the speech. A deep learning-based prediction model labels the robot’s speech with the types of expressions that should accompany it. Then, a rule-based synchronisation module connects different gestures to the correct parts of the speech. For the prediction model, we tested two approaches: (i) a combination of recurrent neural networks and conditional random fields; and (ii) transformer models. The results show that the proposed system can properly select co-speech gestures under the time constraints imposed by real-world interactions. Full article
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 4th Edition)
14 pages, 279 KB  
Article
Breaking the Silence: A Narrative of the Survival of Afghan’s Music
by Ângela Teles and Paula Guerra
Soc. Sci. 2025, 14(9), 549; https://doi.org/10.3390/socsci14090549 - 15 Sep 2025
Viewed by 1323
Abstract
Humanity currently faces a state of crisis, as it navigates the challenges of a quickly evolving world. The increasing number of conflicts and wars has had serious repercussions on human life, contributing to the displacement of populations and a growing influx of refugees. The high number of children and young people among this group requires urgent action to meet their needs for education, health, and a secure upbringing. Music education provides one platform for unique expression and identity for these age groups. In 2022, nearly a hundred young musicians from Afghanistan were welcomed into the cities of Braga and Guimarães in Portugal. They work to defend their culture through orchestral activity which has achieved international reach, thanks to the work of the Afghanistan National Institute of Music (ANIM). This article examines how music connects Afghan refugee youth with host communities. It focuses on the role of musical practice in fostering integration within schools and the broader urban context. Using a qualitative approach, based on ethnographic observation of this orchestra’s rehearsals, this article explores the concept of affordances. Ethnographic observation was conducted throughout school activities, music workshops, and informal interactions during break periods. Field notes focused on participants’ non-verbal expressions, musical engagement, and interactions with both peers and educators. These observations were used to contextualise the interviews and triangulate the data. This theoretical–analytical approach shows that, for these youngsters, music plays a mediating role regarding social actions and experiences, shaping new subjectivities and their externalisations. It is a technology of the self, of (re)adaptation, resistance, and identity re-emergence. The main argument is that ANIM’s music in action is a communication tool that, like migratory processes, reconfigures the identities of its protagonists. Music has been demonstrated to function as a catalyst for connection, predominantly within the context of ensemble and orchestra rehearsals, serving as a shared language. Full article
25 pages, 19135 KB  
Article
Development of a Multi-Platform AI-Based Software Interface for the Accompaniment of Children
by Isaac León, Camila Reyes, Iesus Davila, Bryan Puruncajas, Dennys Paillacho, Nayeth Solorzano, Marcelo Fajardo-Pruna, Hyungpil Moon and Francisco Yumbla
Multimodal Technol. Interact. 2025, 9(9), 88; https://doi.org/10.3390/mti9090088 - 26 Aug 2025
Viewed by 2220
Abstract
The absence of parental presence has a direct impact on the emotional stability and social routines of children, especially during extended periods of separation from their family environment, as in the case of daycare centers, hospitals, or when they remain alone at home. At the same time, the technology currently available to provide emotional support in these contexts remains limited. In response to the growing need for emotional support and companionship in child care, this project proposes the development of a multi-platform software architecture based on artificial intelligence (AI), designed to be integrated into humanoid robots that assist children between the ages of 6 and 14. The system enables daily verbal and non-verbal interactions intended to foster a sense of presence and personalized connection through conversations, games, and empathetic gestures. Built on the Robot Operating System (ROS), the software incorporates modular components for voice command processing, real-time facial expression generation, and joint movement control. These modules allow the robot to hold natural conversations, display dynamic facial expressions on its LCD (Liquid Crystal Display) screen, and synchronize gestures with spoken responses. Additionally, a graphical interface enhances the coherence between dialogue and movement, thereby improving the quality of human–robot interaction. Initial evaluations conducted in controlled environments assessed the system’s fluency, responsiveness, and expressive behavior. Subsequently, it was implemented in a pediatric hospital in Guayaquil, Ecuador, where it accompanied children during their recovery. It was observed that this type of artificial intelligence-based software can significantly enhance the experience of children, opening promising opportunities for its application in clinical, educational, recreational, and other child-centered settings. Full article
13 pages, 1420 KB  
Article
Comparison of Prototype Transparent Mask, Opaque Mask, and No Mask on Speech Understanding in Noise
by Samuel R. Atcherson, Evan T. Finley and Jeanne Hahne
Audiol. Res. 2025, 15(4), 103; https://doi.org/10.3390/audiolres15040103 - 11 Aug 2025
Cited by 1 | Viewed by 1683
Abstract
Background: Face masks are used in healthcare for the prevention of the spread of disease; however, the recent COVID-19 pandemic raised awareness of the challenges of typical opaque masks that obscure nonverbal cues. In addition, various masks have been shown to attenuate speech above 1000 Hz, and lack of nonverbal cues exacerbates speech understanding in the presence of background noise. Transparent masks can help to overcome the loss of nonverbal cues, but they have greater attenuative effects on higher speech frequencies. This study evaluated a newer prototype transparent face mask redesigned from a version evaluated in a previous study. Methods: Thirty participants (10 with normal hearing, 10 with moderate hearing loss, and 10 with severe-to-profound hearing loss) were recruited. Selected lists from the Connected Speech Test (CST) were digitally recorded using male and female talkers and presented to listeners at 65 dB HL in 12 conditions against a background of 4-talker babble (+5 dB SNR): without a mask (auditory only and audiovisual), with an opaque mask (auditory only and audiovisual), and with a transparent mask (auditory only and audiovisual). Results: Listeners with normal hearing performed consistently well across all conditions. For listeners with hearing loss, speech was generally easier to understand with the male talker. Audiovisual conditions were better than auditory-only conditions, and No Mask and Transparent Mask conditions were better than Opaque Mask conditions. Conclusions: These findings continue to support the use of transparent masks to improve communication, minimize medical errors, and increase patient satisfaction. Full article
(This article belongs to the Section Hearing)
24 pages, 9657 KB  
Article
Electroencephalography-Based Pain Detection Using Kernel Spectral Connectivity Network with Preserved Spatio-Frequency Interpretability
by Santiago Buitrago-Osorio, Julian Gil-González, Andrés Marino Álvarez-Meza, David Cardenas-Peña and Alvaro Orozco-Gutierrez
Appl. Sci. 2025, 15(9), 4804; https://doi.org/10.3390/app15094804 - 26 Apr 2025
Cited by 1 | Viewed by 2012
Abstract
Chronic pain leads to not only physical discomfort but also psychological challenges, such as depression and anxiety, which contribute to a substantial healthcare burden. Pain detection and assessment remain a challenge due to pain’s subjective nature. Current clinical methods may be inaccurate or unfeasible for non-verbal patients. Consequently, electroencephalography (EEG) has emerged as a promising non-invasive tool for pain detection. However, EEG-based pain detection faces challenges such as noise, volume conduction effects, and high inter-subject variability. Deep learning (DL) models have shown potential in overcoming these challenges by extracting nonlinear and discriminative patterns. Despite these advancements, such models often require a subject-dependent approach and lack interpretability. To address these limitations, we propose a threefold DL-based framework for coding EEG-based pain detection patterns. (i) We employ the Kernel Cross-Spectral Gaussian Functional Connectivity Network (KCS-FCnet) to code pairwise channel dependencies for pain detection. (ii) We introduce a frequency-based strategy for class activation mapping to visualize pertinent pain EEG features, enhancing visual interpretability through spatio-frequency patterns. (iii) To account for subject variability, we conduct cross-subject analysis and grouping, clustering individuals based on similar pain detection performance, functional connectivity patterns, sex, and age. We evaluate our model using the Brain Mediators of Pain dataset and demonstrate its robustness through subject-dependent and cross-subject generalization tasks for pain detection in non-verbal patients. Full article
(This article belongs to the Special Issue EEG Recognition and Biomedical Signal Processing)
16 pages, 3987 KB  
Article
Coupling Up: A Dynamic Investigation of Romantic Partners’ Neurobiological States During Nonverbal Connection
by Cailee M. Nelson, Christian O’Reilly, Mengya Xia and Caitlin M. Hudac
Behav. Sci. 2024, 14(12), 1133; https://doi.org/10.3390/bs14121133 - 26 Nov 2024
Cited by 4 | Viewed by 6362
Abstract
Nonverbal connection is an important aspect of everyday communication. For romantic partners, nonverbal connection is essential for establishing and maintaining feelings of closeness. EEG hyperscanning offers a unique opportunity to examine the link between nonverbal connection and neural synchrony among romantic partners. The current study used an EEG hyperscanning paradigm to collect frontal alpha asymmetry (FAA) signatures from 30 participants (15 romantic dyads) engaged in five different types of nonverbal connection that varied based on physical touch and visual contact. The results suggest that there was a lack of FAA while romantic partners were embracing and positive FAA (i.e., indicating approach) while they were holding hands, looking at each other, or doing both. Additionally, partners’ FAA synchrony was greatest at a four-second lag while they were holding hands and looking at each other. Finally, there was a significant association between partners’ weekly negative feelings and FAA, such that as they felt more negative, their FAA became more positive. Taken together, this study further supports the idea that fleeting moments of interpersonal touch and gaze are important for the biological mechanisms that may underlie affiliative pair bonding in romantic relationships. Full article
15 pages, 232 KB  
Article
The Experiences and Views of Employees on Hybrid Ways of Working
by Anastasia Hanzis and Leonie Hallo
Adm. Sci. 2024, 14(10), 263; https://doi.org/10.3390/admsci14100263 - 17 Oct 2024
Cited by 7 | Viewed by 19827
Abstract
The contemporary post-COVID-19 corporate environment of instant response and hybrid work settings motivates employees to learn to adjust their expectations. This new corporate working model incorporates flex locations and flex schedules by working at home 1–2 days per week and staying connected for non-urgent requests, even outside business hours. This work setting empowers employees to prioritize work accordingly and to accommodate the fluid schedules of their coworkers. As a result, this new hybrid workplace requires leaders and their teams to face new challenges in terms of communication, coordination, and team connection to remain effective. This research examines the experiences of employees in an SME that applied a hybrid work policy after the pandemic, bringing additional complexity to their modern work system. This study investigates employees’ views on the changing work environment as important evidence for HR management to incorporate into future organizational practices. To understand the various principles at play and provide more granular results, this paper includes a business case study (N = 25) in which semi-structured interviews were used to identify the views and concerns of employees regarding hybrid work settings. The scope of this case study was to collect empirical data regarding this new agile way of working while understanding participant thinking. The findings suggest that while there are clear benefits in terms of efficiency and flexibility in hybrid work settings, there are also challenges related to social interactions and non-verbal cues. This study enhances conceptual and empirical understanding and supports contemporary research on the future of work. Full article
16 pages, 1004 KB  
Article
Beyond Words: Tapping the Potential of Digital Diaries While Exploring Young Adults’ Experiences on Apps
by Rita Alcaire, Ana Marta M. Flores and Eduardo Antunes
Societies 2024, 14(3), 40; https://doi.org/10.3390/soc14030040 - 14 Mar 2024
Cited by 3 | Viewed by 4797
Abstract
In the dynamic landscape of online interactions, this article explores the use of digital diaries to unravel the intricacies of Portuguese young adults’ experiences within the realm of apps and their connection to gender dynamics. By designing a digital participatory research method, we were able to reflect on the participants’ experiences in maintaining the requested diaries, scrutinize the major themes in the narratives generated through this approach, and examine how participants interacted with the prompts sent to them. We delved into how participants both challenged and (re)negotiated these solicitations and how their agency led to an untapped reservoir of insights for the project in ways that went beyond words. Visual and non-verbal elements brought insights into young adults’ interactions with mobile applications, offering a comprehensive exploration of four key themes: mobile apps as part of young adults’ routines, between performance and authenticity, making the diaries their own, and elaborating on feelings. We also explored diary methods at the convergence of various disciplines and their high potential for contributing, in creative ways, to topics related to gender, mental health, productivity, relationships, online identity management, apps in everyday life, intimacy, and more. Full article
(This article belongs to the Special Issue Visual Arts and Design: Practice-Based Research)
12 pages, 537 KB  
Article
Personalizing Communication of Clinicians with Chronically Ill Elders in Digital Encounters—A Patient-Centered View
by Gillie Gabay, Hana Ornoy, Attila Gere and Howard Moskowitz
Healthcare 2024, 12(4), 434; https://doi.org/10.3390/healthcare12040434 - 8 Feb 2024
Cited by 2 | Viewed by 2284
Abstract
Background: Chronically ill elderly patients are concerned about losing the personal connection with clinicians in digital encounters, and clinicians are concerned about missing nonverbal cues that are important for diagnosis, thus jeopardizing quality of care. Aims: This study validated the expectations and preferences of chronically ill elderly patients regarding specific messages for communication with clinicians in telemedicine. Methods: The sample comprised 600 elderly chronically ill patients who use telehealth. We used a conjoint-based experimental design to test numerous messages. The outcome variable was elderly patients’ expectations from communication with clinicians in telemedicine; the independent variables were known categories of patient–clinician communication. Respondents rated each of 24 vignettes of messages. Results: Mathematical clustering yielded three mindsets, with statistically significant differences among them. Members of mindset 1 were most concerned with non-verbal communication, members of mindset 2 preferred communication that enhances the internal locus of control, and members of mindset 3 had an external locus of control and strongly opposed any dialogue about their expectations from communication. Conclusions: The predictive algorithm we developed enables clinicians to identify which of the three mindsets each chronically ill elderly patient belongs to and to personalize communication in digital encounters accordingly, structuring the encounter with greater specificity and thereby enhancing patient-centered care. Full article
24 pages, 2837 KB  
Review
Social Brain Perspectives on the Social and Evolutionary Neuroscience of Human Language
by Nathan Oesch
Brain Sci. 2024, 14(2), 166; https://doi.org/10.3390/brainsci14020166 - 7 Feb 2024
Cited by 11 | Viewed by 10013 | Correction
Abstract
Human language and social cognition are two key capacities that have traditionally been studied as separate domains. Nonetheless, an emerging view suggests an alternative perspective. Drawing on the theoretical underpinnings of the social brain hypothesis (a thesis on the evolution of brain size and intelligence), the social complexity hypothesis (a thesis on the evolution of communication), and empirical research from comparative animal behavior, human social behavior, language acquisition in children, social cognitive neuroscience, and the cognitive neuroscience of language, it is argued that social cognition and language are two significantly interconnected capacities of the human species. Here, evidence in support of this view is reviewed, including (1) recent developmental studies on language learning in infants and young children, pointing to the crucial benefits of social stimulation for youngsters, including the quality and quantity of incoming linguistic information, dyadic infant/child-to-parent non-verbal and verbal interactions, and other important social cues integral to facilitating language learning and social bonding; (2) studies of the adult human brain, suggesting a high degree of specialization for sociolinguistic information processing, memory retrieval, and comprehension, indicating that these neural areas may connect social cognition with language and social bonding; (3) developmental deficits in language and social cognition, including autism spectrum disorder (ASD), illustrating a unique developmental profile and further linking language, social cognition, and social bonding; and (4) neural biomarkers that may help to identify early developmental disorders of language and social cognition. In effect, the social brain and social complexity hypotheses may jointly help to describe how neurotypical children and adults acquire language, why autistic children and adults exhibit simultaneous deficits in language and social cognition, and why nonhuman primates and other organisms with significant computational capacities cannot learn language. Perhaps most critically, this article argues that this and related research will allow scientists to generate a holistic profile and deeper understanding of the healthy adult social brain while developing more innovative and effective diagnoses, prognoses, and treatments for maladies and deficits associated with the social brain. Full article
(This article belongs to the Special Issue Neurodevelopmental Disorders and Early Language Acquisition)

13 pages, 5061 KB  
Article
Research on the Pathogenesis of Cognitive and Neurofunctional Impairments in Patients with Noonan Syndrome: The Role of Rat Sarcoma–Mitogen Activated Protein Kinase Signaling Pathway Gene Disturbances
by Natalia Braun-Walicka, Agnieszka Pluta, Tomasz Wolak, Edyta Maj, Agnieszka Maryniak, Monika Gos, Anna Abramowicz, Aleksandra Landowska, Ewa Obersztyn and Jerzy Bal
Genes 2023, 14(12), 2173; https://doi.org/10.3390/genes14122173 - 3 Dec 2023
Cited by 5 | Viewed by 2595
Abstract
Noonan syndrome (NS) is one of the most common genetic conditions inherited mostly in an autosomal dominant manner with vast heterogeneity in clinical and genetic features. Patients with NS might have speech disturbances, memory and attention deficits, limitations in daily functioning, and decreased overall intelligence. Here, 34 patients with Noonan syndrome and 23 healthy controls were enrolled in a study involving gray and white matter volume evaluation using voxel-based morphometry (VBM), white matter connectivity measurements using diffusion tensor imaging (DTI), and resting-state functional magnetic resonance imaging (rs-fMRI). Fractional anisotropy (FA) and mean diffusivity (MD) probability distributions were calculated. Cognitive abilities were assessed using the Stanford Binet Intelligence Scales. Reductions in white matter connectivity were detected using DTI in NS patients. The rs-fMRI revealed hyper-connectivity in NS patients between the sensorimotor network and language network and between the sensorimotor network and salience network in comparison to healthy controls. NS patients exhibited decreased verbal and nonverbal IQ compared to healthy controls. The assessment of the microstructural alterations of white matter as well as the resting-state functional connectivity (rsFC) analysis in patients with NS may shed light on the mechanisms responsible for cognitive and neurofunctional impairments. Full article
(This article belongs to the Special Issue Genetics and Genomics of Heritable Pediatric Disorders)

28 pages, 582 KB  
Systematic Review
Cochlear Implantation in Children with Additional Disabilities: A Systematic Review
by Valeria Caragli, Daniele Monzani, Elisabetta Genovese, Silvia Palma and Antonio M. Persico
Children 2023, 10(10), 1653; https://doi.org/10.3390/children10101653 - 5 Oct 2023
Cited by 12 | Viewed by 6042
Abstract
This study examines the last 10 years of medical literature on the benefits of cochlear implantation in children who are deaf or hard of hearing (DHH) with additional disabilities. The most recent literature concerning cochlear implants (CIs) in DHH children with additional disabilities was systematically explored through PubMed, Embase, Scopus, PsycINFO, and Web of Science from January 2012 to July 2023. Our two-stage search strategy selected a total of 61 articles concerning CI implantation in children with several forms of additional disabilities: autism spectrum disorder, cerebral palsy, visual impairment, motor disorders, developmental delay, genetic syndromes, and intellectual disability. Overall, many children with additional disabilities benefit from CIs by acquiring greater environmental sound awareness. This, in turn, improves non-verbal communication and adaptive skills, with greater possibilities to relate to others and to be connected with the environment. By contrast, despite some improvement, expressive language tends to develop more slowly and to a lesser extent than in children affected by hearing loss only. Further studies are needed to better appreciate the specificities of each disability and to personalize interventions, not restricting the analysis to auditory and language skills, but rather applying or developing cross-culturally validated instruments able to reliably assess the developmental trajectory and the quality of life of DHH children with additional disabilities before and after CI. Full article
(This article belongs to the Section Pediatric Neurology & Neurodevelopmental Disorders)