Sensors for Behavioral Science—Social, Affective, and Cognitive Science Perspectives

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 October 2020) | Viewed by 27195

Special Issue Editors


Prof. Dr. Takashi Yamauchi
Guest Editor
Department of Psychological and Brain Sciences, Texas A&M University, College Station, TX 77845, USA
Interests: cognitive science; cognitive neuroscience; psychopathology; affective computing; neuroeconomics; cognitive computational neuroscience; computational psychiatry

Prof. Dr. Theodora Chaspari
Guest Editor
Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA
Interests: machine learning; signal processing; affective computing; ambulatory monitoring; behavioral signal processing; speech; physiology

Special Issue Information

Dear Colleagues,

Sensor technologies have changed the landscape of behavioral science. Traditional measures of behavior (response time and accuracy) have been supplemented with genetic, physiological, and activity-based indices taken from high-end imaging equipment and off-the-shelf wearable devices. This Special Issue focuses on research, development, and applications of sensors as analytical tools for social, affective, and cognitive science and cognitive neuroscience. The use of sensors in behavioral science is diverse, ranging from medical-grade brain and physiological sensors (e.g., fMRI, PET, EEG, fNIRS, EDA) to consumer-grade devices (Kinect, MUSE, wearables, smartphones, tablets, smart speakers, VR and AR headsets, pupillometry, and eye trackers); it also includes creative applications of IoT devices (e.g., Arduino, Raspberry Pi) in ubiquitous and ambient computing. This Special Issue aims to organize these far-reaching applications of sensors in behavioral science around a range of coherent themes, including (1) experimental design and data acquisition; (2) hypothesis-driven research melded with sensor devices; (3) data cleaning and processing; and (4) sensor data analysis.

The topical areas include psychopathology, neuroeconomics, decision making, biofeedback, cognitive control, emotion regulation, interpersonal communication, personality, temperament, cognitive computational neuroscience, computational psychiatry, mental health intervention, and teaching and learning. Innovative and creative uses of sensors, theory-driven data analysis, data fusion techniques, Bayesian cognitive modeling, machine learning, probabilistic programming, neural networks, and reinforcement learning are also welcome. The main aim is to apply sensor technologies to advance our understanding of human behavior as manifested in emotion, social interaction, cognition, and mental health.

We would like to invite you to participate by submitting original research papers, review articles, short commentaries, theoretical inquiries, and/or tutorials about the use of sensors in human behavior understanding.

Prof. Dr. Takashi Yamauchi
Prof. Dr. Theodora Chaspari
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cognitive science
  • cognitive neuroscience
  • wearables
  • IoT
  • EEG
  • neuroimaging
  • EDA
  • peripheral physiology
  • ambulatory monitoring
  • data quality concerns and mitigation

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (7 papers)

Research

26 pages, 5068 KiB  
Article
The Effect of Co-Verbal Remote Touch on Electrodermal Activity and Emotional Response in Dyadic Discourse
by Angela Chan, Francis Quek, Haard Panchal, Joshua Howell, Takashi Yamauchi and Jinsil Hwaryoung Seo
Sensors 2021, 21(1), 168; https://doi.org/10.3390/s21010168 - 29 Dec 2020
Cited by 6 | Viewed by 3478
Abstract
This article explores the affective impact of remote touch when used in conjunction with video telecommunication. Committed couples were recruited to engage in semi-structured discussions after they watched a video clip that contained emotionally charged moments. They used paired touch input and output devices to send upper-arm squeezes to each other in real time. Users were not told how to use the devices and were free to define the purpose of their use. We examined how remote touch was used and its impact on skin conductance and affective response. We observed 65 different touch intents, which were classified into broader categories. We employed a series of analyses within a framework of behavioral and experiential timescales. Our findings revealed that remote touches changed the overall psychological affective experience and the skin conductance response; only remote touches that were judged to be affective elicited significant changes in EDA measurements. Our study demonstrates the affective power of remote touch in video telecommunication and shows that off-the-shelf wearable EDA sensing devices can detect such affective impacts. Our findings pave the way for new kinds of technologies with real-time feedback support for a range of communicative and special needs such as isolation, stress, and anxiety.
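
Off-the-shelf EDA devices report skin conductance as a sampled time series, so the event-locked analysis described above reduces to comparing a short post-touch window against a pre-touch baseline. Below is a minimal NumPy sketch; the function name, sampling rate, and window lengths are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def scr_amplitude(eda, fs, event_times, baseline_s=1.0, window_s=5.0):
    """Event-locked skin conductance response: peak rise above the
    pre-event baseline within a post-event window. A common heuristic,
    not the authors' exact pipeline."""
    out = []
    for t in event_times:
        i = int(t * fs)
        base = eda[max(0, i - int(baseline_s * fs)):i].mean()  # pre-touch level
        post = eda[i:i + int(window_s * fs)]                   # post-touch window
        out.append(post.max() - base)
    return np.array(out)

# Hypothetical usage: 60 s of 4 Hz wrist EDA (microsiemens), three touch events.
rng = np.random.default_rng(0)
eda = 8.0 + 0.05 * rng.standard_normal(240).cumsum()
print(scr_amplitude(eda, fs=4, event_times=[10.0, 30.0, 50.0]))
```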

20 pages, 5101 KiB  
Article
The Role of Features Types and Personalized Assessment in Detecting Affective State Using Dry Electrode EEG
by Paruthi Pradhapan, Emmanuel Rios Velazquez, Jolanda A. Witteveen, Yelena Tonoyan and Vojkan Mihajlović
Sensors 2020, 20(23), 6810; https://doi.org/10.3390/s20236810 - 28 Nov 2020
Cited by 9 | Viewed by 2501
Abstract
Assessing the human affective state using electroencephalography (EEG) has shown good potential but has failed to demonstrate reliable performance in real-life applications, especially when the setup itself might impact affective processing and the analysis relies on generalized models of affect. Additionally, using subjective assessment of one's affect as ground truth has often been disputed. To shed light on the former challenge, we explored the use of a convenient EEG system with 20 participants to capture their reactions to affective movie clips in a naturalistic setting. Employing a state-of-the-art machine learning approach demonstrated that the highest performance is reached when combining linear features, namely symmetry features and single-channel features, with nonlinear ones derived by a multiscale entropy approach. Nevertheless, the best performance, reflected in the highest F1-score achieved in a binary classification task, was 0.71 for valence and 0.62 for arousal. This performance was 10–20% better than that obtained using ratings provided by 13 independent raters. We argue that affective self-assessment might be underrated and that it is crucial to account for personal differences in both perception and physiological response to affective cues.
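
Multiscale entropy, the nonlinear feature family combined here with the linear EEG features, coarse-grains the signal at increasing scales and computes sample entropy at each scale. A minimal NumPy sketch follows; m, r, and the scale range are common defaults rather than the paper's settings, and the tolerance convention varies between implementations.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy; tolerance is r times the standard deviation of the
    series passed in (conventions vary between implementations)."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def pair_count(mm):
        # all length-mm templates; O(n^2) memory, fine for a sketch
        t = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.abs(t[:, None] - t[None, :]).max(axis=2)
        return (np.sum(d <= tol) - len(t)) / 2  # unordered pairs, no self-matches

    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    """Coarse-grain the signal at each scale, then compute sample entropy."""
    x = np.asarray(x, float)
    feats = []
    for s in scales:
        n = len(x) // s
        feats.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1), m, r))
    return np.array(feats)

# Hypothetical usage on a few seconds of one EEG channel.
print(multiscale_entropy(np.random.default_rng(0).standard_normal(500)))
```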

24 pages, 2756 KiB  
Article
A Usability Study of Physiological Measurement in School Using Wearable Sensors
by Nattapong Thammasan, Ivo V. Stuldreher, Elisabeth Schreuders, Matteo Giletta and Anne-Marie Brouwer
Sensors 2020, 20(18), 5380; https://doi.org/10.3390/s20185380 - 20 Sep 2020
Cited by 22 | Viewed by 4844
Abstract
Measuring psychophysiological signals of adolescents using unobtrusive wearable sensors may contribute to understanding the development of emotional disorders. This study investigated the feasibility of measuring high-quality physiological data and examined the validity of signal processing in a school setting. Among 86 adolescents, a total of more than 410 h of electrodermal activity (EDA) data were recorded using a wrist-worn sensor with gelled electrodes, and over 370 h of heart rate data were recorded using a chest-strap sensor. The results support the feasibility of monitoring physiological signals at school. We describe specific challenges and provide recommendations for signal analysis, including dealing with invalid signals due to loose sensors and with quantization noise, which can be caused by limitations in analog-to-digital conversion in wearable devices and be mistaken for physiological responses. Importantly, our results show that using toolboxes for automatic signal preprocessing, decomposition, and artifact detection with default parameters, while neglecting differences between devices and measurement contexts, yields misleading results. Time courses of students’ physiological signals throughout a class were found to be clearer after applying our proposed preprocessing steps.
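
One pitfall the authors flag, quantization noise from coarse analog-to-digital conversion being mistaken for skin conductance responses, can be screened for by counting the distinct amplitude levels a segment occupies. A sketch under assumed thresholds (not taken from the paper):

```python
import numpy as np

def quantization_limited(segment, min_levels=20):
    """Flag a segment whose samples occupy only a few discrete amplitude
    levels, a symptom of coarse analog-to-digital conversion whose step-like
    jumps can be mistaken for skin conductance responses. The threshold is
    illustrative, not taken from the paper."""
    return np.unique(np.round(segment, 6)).size < min_levels

# Hypothetical 4 Hz EDA over 30 s: a smooth trace vs. a coarsely digitized one.
t = np.linspace(0, 30, 120)
smooth = 8.0 + 0.3 * np.sin(t / 5)
stepped = np.round(smooth, 1)  # emulate a low-resolution converter
print(quantization_limited(smooth), quantization_limited(stepped))  # False True
```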

15 pages, 1467 KiB  
Article
Which Visual Modality Is Important When Judging the Naturalness of the Agent (Artificial Versus Human Intelligence) Providing Recommendations in the Symbolic Consumption Context?
by Kyungmi Chung, Jin Young Park, Kiwan Park and Yaeri Kim
Sensors 2020, 20(17), 5016; https://doi.org/10.3390/s20175016 - 3 Sep 2020
Cited by 3 | Viewed by 3251
Abstract
This study aimed to explore how the type and visual modality of a recommendation agent’s identity affect male university students’ (1) self-reported responses to an agent-recommended symbolic brand when evaluating the naturalness of the agent, human or artificial intelligence (AI), and (2) early event-related potential (ERP) responses between text- and face-specific scalp locations. Twenty-seven participants (M = 25.26, SD = 5.35) whose consumption was motivated more by symbolic needs (vs. functional) were instructed to perform a visual task to evaluate the naturalness of the target stimuli. As hypothesized, the subjective evaluation showed that they held less favorable attitudes and perceived greater unnaturalness when the symbolic brand was recommended by AI (vs. human). Based on this self-report, two epochs were segmented for the ERP analysis: human-natural and AI-unnatural. As revealed by P100 amplitude modulation across the visual modalities of the two agents, their evaluation relied more on the face image than on the text. Furthermore, this tendency was consistently observed in the N170 amplitude when the agent identity was defined as human. However, when the agent identity was defined as AI, reversed N170 modulation was observed, indicating that participants referred more to textual information than to graphical information to assess the naturalness of the agent.
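
ERP components such as the P100 and N170 are extracted by epoching the continuous EEG around stimulus onsets, baseline-correcting each epoch, and averaging. A generic NumPy recipe follows; array shapes, window bounds, and sampling rate are assumptions, not the authors' preprocessing.

```python
import numpy as np

def erp_average(eeg, fs, onsets, tmin=-0.1, tmax=0.4):
    """Cut baseline-corrected epochs around stimulus onsets and average them.
    eeg: (n_channels, n_samples); onsets: event times in seconds."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t in onsets:
        i = int(t * fs)
        if i - pre < 0 or i + post > eeg.shape[1]:
            continue  # skip events too close to the recording edges
        ep = eeg[:, i - pre:i + post]
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)  # baseline correction
        epochs.append(ep)
    return np.mean(epochs, axis=0)  # grand average, (n_channels, n_times)

# Hypothetical usage: 32-channel EEG at 250 Hz, one stimulus every 2 s.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 250 * 120))
erp = erp_average(eeg, fs=250, onsets=np.arange(2.0, 118.0, 2.0))
print(erp.shape)  # (32, 125); P100/N170 would be read from posterior channels
```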

13 pages, 1589 KiB  
Article
Speech Discrimination in Real-World Group Communication Using Audio-Motion Multimodal Sensing
by Takayuki Nozawa, Mizuki Uchiyama, Keigo Honda, Tamio Nakano and Yoshihiro Miyake
Sensors 2020, 20(10), 2948; https://doi.org/10.3390/s20102948 - 22 May 2020
Cited by 1 | Viewed by 4673
Abstract
Speech discrimination that determines whether a participant is speaking at a given moment is essential in investigating human verbal communication. Specifically, in dynamic real-world situations where multiple people participate in, and form, groups in the same space, simultaneous speakers make speech discrimination based solely on audio sensing difficult. In this study, we focused on physical activity during speech and hypothesized that combining audio and physical motion data acquired by wearable sensors can improve speech discrimination. Utterance and physical activity data of students in a university participatory class were recorded using smartphones worn around their necks. First, we tested the temporal relationship between manually identified utterances and physical motions and confirmed that physical activities in wide frequency ranges co-occurred with utterances. Second, we trained and tested classifiers for each participant and found higher performance with the audio-motion classifier (average accuracy 92.2%) than with both the audio-only (80.4%) and motion-only (87.8%) classifiers. Finally, we tested inter-individual classification and obtained higher performance with the combined audio-motion classifier (83.2%) than with the audio-only (67.7%) and motion-only (71.9%) classifiers. These results show that audio-motion multimodal sensing using widely available smartphones can provide effective utterance discrimination in dynamic group communication.
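
The fusion step, concatenating per-frame audio-energy features with motion-magnitude statistics before classification, can be sketched with scikit-learn. The frame length, feature set, and random-forest choice below are illustrative assumptions; the paper's exact features and classifier may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_features(audio, accel, fs_audio, fs_motion, frame_s=1.0):
    """Per-frame audio log-energy plus accelerometer-magnitude statistics,
    a simplified stand-in for the paper's feature set."""
    n = int(min(len(audio) / fs_audio, len(accel) / fs_motion) / frame_s)
    rows = []
    for k in range(n):
        a = audio[int(k * frame_s * fs_audio):int((k + 1) * frame_s * fs_audio)]
        m = accel[int(k * frame_s * fs_motion):int((k + 1) * frame_s * fs_motion)]
        mag = np.linalg.norm(m, axis=1)  # accel is (n_samples, 3)
        rows.append([np.log(np.mean(a ** 2) + 1e-12), mag.mean(), mag.std()])
    return np.array(rows)

# Hypothetical data: 60 one-second frames with manual speaking labels.
rng = np.random.default_rng(0)
audio = rng.standard_normal(60 * 8000)      # 8 kHz microphone stream
accel = rng.standard_normal((60 * 50, 3))   # 50 Hz accelerometer stream
labels = rng.integers(0, 2, 60)             # illustrative annotations

X = frame_features(audio, accel, 8000, 50)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```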

21 pages, 2811 KiB  
Article
Impact of Think-Aloud on Eye-Tracking: A Comparison of Concurrent and Retrospective Think-Aloud for Research on Decision-Making in the Game Environment
by Michal Prokop, Ladislav Pilař and Ivana Tichá
Sensors 2020, 20(10), 2750; https://doi.org/10.3390/s20102750 - 12 May 2020
Cited by 20 | Viewed by 4024
Abstract
Simulations and games make it possible to research the complex processes of managerial decision-making. However, this modern field requires adequate methodological procedures. Many authors recommend combining concurrent think-aloud (CTA) or retrospective think-aloud (RTA) with eye-tracking to investigate cognitive processes such as decision-making. Nevertheless, previous studies have given little or no consideration to the possible differential impact of the two think-aloud methods on the data provided by eye-tracking. Therefore, the main aim of this study is to compare and assess whether and how these methods differ in their impact on eye-tracking. An experiment was conducted for this purpose. Participants were 14 managers who played a specific simulation game with CTA and 17 managers who played the same game with RTA. The results empirically show that CTA significantly distorts the data provided by eye-tracking, whereas data gathered when RTA is used provide independent evidence about the participants’ behavior. These findings suggest that RTA is more suitable for combined use with eye-tracking in research on decision-making in game environments.
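
With groups of 14 and 17 participants, a comparison of this kind typically reduces each participant's gaze record to summary metrics and applies a nonparametric test. A SciPy sketch with hypothetical fixation durations (not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-participant mean fixation durations (ms); 14 CTA, 17 RTA.
cta = [212, 198, 240, 225, 205, 231, 219, 244, 201, 227, 236, 210, 223, 230]
rta = [188, 176, 203, 195, 181, 190, 199, 172, 185, 206, 178, 192, 201, 169,
       183, 196, 187]

stat, p = mannwhitneyu(cta, rta, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```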

Other

13 pages, 1763 KiB  
Perspective
A Novel Mixed Methods Approach to Synthesize EDA Data with Behavioral Data to Gain Educational Insight
by Clodagh Reid, Conor Keighrey, Niall Murray, Rónán Dunbar and Jeffrey Buckley
Sensors 2020, 20(23), 6857; https://doi.org/10.3390/s20236857 - 30 Nov 2020
Cited by 2 | Viewed by 3475
Abstract
Whilst investigating student performance in design and arithmetic tasks, as well as during exams, electrodermal activity (EDA)-based sensors have been used in attempts to understand cognitive function and cognitive load. Limitations of the approaches employed include the lack of capacity to mark events in the data and to account for other variables relating to performance outcomes. This paper aims to address these limitations and to support the utility of wearable EDA sensor technology in educational research settings. These aims are achieved through the use of bespoke time-mapping software that identifies key events during task performance and by taking a novel approach to synthesizing EDA data from a qualitative behavioral perspective. A convergent mixed-method design is presented whereby the associated implementation follows a two-phase approach. The first phase involves the collection of the required EDA and behavioral data. Phase two outlines a mixed-method analysis with two approaches to synthesizing the EDA data with behavioral analyses. There is an optional third phase, which would involve the sequential collection of any additional data to support contextualizing or interpreting the EDA and behavioral data; including this phase would turn the method into a complex sequential mixed-method design. Through application of the convergent or complex sequential mixed method, valuable insight can be gained into the complexities of individual learning experiences, supporting clearer inferences about the factors relating to performance. These inferences can be used to inform task design and contribute to the improvement of the teaching and learning experience.
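
The mechanical core of the method, marking behavioral events on the EDA timeline so the two streams can be read together, is straightforward once both share a session clock. A pandas sketch with hypothetical timestamps and column names (not the authors' software):

```python
import pandas as pd

# Hypothetical streams on a shared session clock (seconds): wrist EDA samples
# and behavioral event marks coded from video.
eda = pd.DataFrame({"t": [0.0, 0.25, 0.5, 0.75, 1.0, 1.25],
                    "eda_uS": [7.9, 7.9, 8.1, 8.4, 8.3, 8.2]})
events = pd.DataFrame({"t": [0.4, 1.1],
                       "behavior": ["reads_problem", "sketches_solution"]})

# Label each EDA sample with the most recent behavioral event, if any.
merged = pd.merge_asof(eda.sort_values("t"), events.sort_values("t"),
                       on="t", direction="backward")
print(merged)
```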
