Human-Computer Interaction in Smart Environments

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 58832

Special Issue Editors


Dr. Amos Azaria
Guest Editor
Computer Science Department, Ariel University, Ramat HaGolan St, Ariel, Israel
Interests: human–agent interaction; virtual assistants

Dr. Ariella Richardson
Guest Editor
Lev Academic Center, Havaad Haleumi 21, Givat Mordechai 91160, Jerusalem, Israel
Interests: human–agent interaction; data mining for health applications

Special Issue Information

Dear Colleagues,

The ubiquitous presence of computers and other sensing technology in our daily lives emphasizes the need for smart methods of interaction with human users.

Sensor data may be collected and processed by a variety of devices, such as smartphones, smart speakers, smart vehicles, and other smart devices.

Interacting with human users requires handling various types of human input, such as voice, text, gestures, motion, and typing, as well as environmental sensing of factors such as temperature, lighting, and noise.

Human–computer interaction requires the design of human interfaces; an understanding of humans, their goals, and their preferences; and the development of methods for sensor data analysis.

Smart environments are composed of sensors and actuators in an instrumented space, generally used for interacting with a human user.

Dr. Amos Azaria
Dr. Ariella Richardson
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–computer interaction
  • human–robot interaction
  • human–agent interaction
  • humans in smart environments
  • humans in smart homes
  • intelligent assistants
  • sensors for human interaction

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (15 papers)

Research

11 pages, 719 KiB  
Article
Patients’ Self-Report and Handwriting Performance Features as Indicators for Suspected Mild Cognitive Impairment in Parkinson’s Disease
by Sara Rosenblum, Sonya Meyer, Ariella Richardson and Sharon Hassin-Baer
Sensors 2022, 22(2), 569; https://doi.org/10.3390/s22020569 - 12 Jan 2022
Cited by 4 | Viewed by 2602
Abstract
Early identification of mild cognitive impairment (MCI) in Parkinson’s disease (PD) patients can lessen emotional and physical complications. In this study, a cognitive functional (CF) feature using cognitive and daily living items of the Unified Parkinson’s Disease Rating Scale served to define PD patients as suspected or not for MCI. The study aimed to compare objective handwriting performance measures with the perceived general functional abilities (PGF) of both groups, analyze correlations between handwriting performance measures and PGF for each group, and find out whether participants’ general functional abilities, depression levels, and digitized handwriting measures predicted this CF feature. Seventy-eight participants diagnosed with PD by a neurologist (25 suspected for MCI based on the CF feature) completed the PGF as part of the Daily Living Questionnaire and wrote on a digitizer-affixed paper in the Computerized Penmanship Handwriting Evaluation Test. Results indicated significant group differences in PGF scores and handwriting stroke width, and significant medium correlations between PGF score, pen-stroke width, and the CF feature. Regression analyses indicated that PGF scores and mean stroke width accounted for 28% of the CF feature variance above age. Nuances of perceived daily functional abilities validated by objective measures may contribute to the early identification of suspected PD-MCI.
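
As a minimal sketch of the kind of analysis the abstract describes (synthetic data; the feature scales and random labels below are our assumptions, not the study's data or code), one could fit a logistic regression predicting the suspected-MCI label from a PGF score and mean pen-stroke width:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    n = 78                                  # sample size taken from the abstract
    X = np.column_stack([
        rng.normal(50, 10, n),              # PGF score (hypothetical scale)
        rng.normal(1.2, 0.3, n),            # mean pen-stroke width in mm (hypothetical)
    ])
    y = rng.integers(0, 2, n)               # 1 = suspected MCI per the CF feature

    clf = LogisticRegression().fit(X, y)    # binary classifier on the two features
    print(clf.predict_proba(X[:3]))         # class probabilities for three participants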

11 pages, 632 KiB  
Article
An Electrophysiological Model for Assessing Cognitive Load in Tacit Coordination Games
by Ilan Laufer, Dor Mizrahi and Inon Zuckerman
Sensors 2022, 22(2), 477; https://doi.org/10.3390/s22020477 - 9 Jan 2022
Cited by 13 | Viewed by 2098
Abstract
Previously, it was shown that some people are better coordinators than others; however, the relative weight of intuitive (system 1) versus deliberate (system 2) modes of thinking in tacit coordination tasks is still not resolved. To address this question, we have extracted an electrophysiological index, the theta-beta ratio (TBR), from the electroencephalography (EEG) recorded from participants while they were engaged in a semantic coordination task. Results have shown that individual coordination ability, game difficulty, and response time are each positively correlated with cognitive load. These results suggest that better coordinators rely more on complex thought processes and on more deliberate thinking while coordinating. The model we have presented may be used to assess the depth of reasoning individuals engage in when facing tasks that require different degrees of resource allocation. The findings, as well as future research directions, are discussed.
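
For readers unfamiliar with the index, the following is a minimal illustrative sketch of computing a theta-beta ratio from a single EEG channel with Welch's method (the band limits are the commonly used ones; the paper's preprocessing pipeline is not reproduced here):

    import numpy as np
    from scipy.signal import welch

    def band_power(freqs, psd, lo, hi):
        # approximate the power in [lo, hi) Hz by summing PSD bins
        mask = (freqs >= lo) & (freqs < hi)
        return psd[mask].sum() * (freqs[1] - freqs[0])

    def theta_beta_ratio(eeg, fs=256):
        # TBR = theta-band (4-8 Hz) power divided by beta-band (13-30 Hz) power
        freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
        return band_power(freqs, psd, 4, 8) / band_power(freqs, psd, 13, 30)

    rng = np.random.default_rng(0)
    print(theta_beta_ratio(rng.standard_normal(10 * 256)))  # 10 s of synthetic data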

22 pages, 2670 KiB  
Article
Advancing Smart Home Awareness—A Conceptual Computational Modelling Framework for the Execution of Daily Activities of People with Alzheimer’s Disease
by Nikolaos Liappas, José Gabriel Teriús-Padrón, Rebeca Isabel García-Betances and María Fernanda Cabrera-Umpiérrez
Sensors 2022, 22(1), 166; https://doi.org/10.3390/s22010166 - 27 Dec 2021
Cited by 3 | Viewed by 3086
Abstract
Utilizing context-aware tools in smart homes (SH) helps to incorporate higher-quality interaction paradigms between the house and specific groups of users, such as people with Alzheimer’s disease (AD). One method of delivering these interaction paradigms acceptably and efficiently is through context-aware processing of the residents’ behavior within the SH. Predicting human behavior and uncertain events is crucial in the prevention of upcoming missteps and confusion when people with AD perform their daily activities. Modelling human behavior and mental states using cognitive architectures produces computational models capable of replicating real use case scenarios. In this way, SHs can reinforce the execution of daily activities effectively once they acquire adequate awareness of the missteps, interruptions, memory problems, and unpredictable events that can arise during the daily life of a person living with cognitive deterioration. This paper presents a conceptual computational framework for the modelling of daily living activities of people with AD and their progression through different stages of AD. Simulations and initial results demonstrate that it is feasible to effectively estimate and predict common errors and behaviors in the execution of daily activities under specific assessment tests.

14 pages, 583 KiB  
Article
A Safe Collaborative Chatbot for Smart Home Assistants
by Merav Chkroun and Amos Azaria
Sensors 2021, 21(19), 6641; https://doi.org/10.3390/s21196641 - 6 Oct 2021
Cited by 4 | Viewed by 2909
Abstract
Smart home assistants, which enable users to control home appliances and can be used for holding entertaining conversations, have become an inseparable part of many people’s homes. Recently, there have been many attempts to allow end-users to teach a home assistant new commands, responses, and rules, which can then be shared with a larger community. However, allowing end-users to teach an agent new responses, which are shared with a large community, opens the gate to malicious users, who can teach the agent inappropriate responses in order to promote their own business, products, or political views. In this paper, we present a platform that enables users to collaboratively teach a smart home assistant (or chatbot) responses using natural language. We present a method of collectively detecting malicious users and using the commands taught by the malicious users to further mitigate the activity of future malicious users. We ran an experiment with 192 subjects and show the effectiveness of our platform.
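
A toy sketch of the collective-detection idea (our simplification; the thresholds and criteria below are hypothetical, not the paper's method): flag a teacher once enough of their taught responses are reported as inappropriate by the community, then reuse their teachings as a screen for future ones:

    def flag_malicious(taught, reports, min_reported=3, min_ratio=0.3):
        # taught: user -> list of responses they taught the assistant
        # reports: response -> number of community reports against it
        flagged = set()
        for user, responses in taught.items():
            reported = [r for r in responses if reports.get(r, 0) > 0]
            if len(reported) >= min_reported and len(reported) / len(responses) >= min_ratio:
                flagged.add(user)
        return flagged

    taught = {"alice": ["resp1", "resp2"], "mallory": ["spam1", "spam2", "spam3", "ok1"]}
    reports = {"spam1": 4, "spam2": 2, "spam3": 1}
    # teachings from flagged users can seed a blocklist for vetting new responses
    print(flag_malicious(taught, reports))  # {'mallory'}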

11 pages, 301 KiB  
Communication
Privacy–Accuracy Consideration in Devices That Collect Sensor-Based Information
by Lihi Dery and Artyom Jelnov
Sensors 2021, 21(14), 4684; https://doi.org/10.3390/s21144684 - 9 Jul 2021
Cited by 4 | Viewed by 2510
Abstract
Accurately tailored support such as advice or assistance can increase user satisfaction from interactions with smart devices; however, in order to achieve high accuracy, the device must obtain and exploit private user data, and thus confidential user information might be jeopardized. We provide an analysis of this privacy–accuracy trade-off. We assume two positive correlations: a user’s utility from a device is positively correlated with the user’s privacy risk and also with the quality of the advice or assistance offered by the device. The extent of the privacy risk is unknown to the user. Thus, privacy-concerned users might choose not to interact with devices they deem unsafe. We suggest that in the first period of usage, the device should not employ its advice or assistance capabilities to their full extent, since this may intimidate users from adopting it. Using three analytical propositions, we further offer an optimal policy for smart device exploitation of private data for the purpose of interactions with users.
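
The flavor of the trade-off can be seen in a stylized numerical toy (our construction, not the paper's propositions): if higher first-period data exploitation improves advice quality but deters adoption, expected utility peaks strictly below full exploitation:

    import numpy as np

    e = np.linspace(0, 1, 101)         # first-period data-exploitation level
    advice_quality = e                 # more data -> more accurate advice
    adoption_prob = 1 - 0.8 * e        # perceived privacy risk deters users
    expected_utility = adoption_prob * advice_quality

    best = e[np.argmax(expected_utility)]
    print(f"utility-maximizing exploitation level: {best:.2f}")  # below 1.0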

20 pages, 505 KiB  
Article
Dynamic Acoustic Unit Augmentation with BPE-Dropout for Low-Resource End-to-End Speech Recognition
by Aleksandr Laptev, Andrei Andrusenko, Ivan Podluzhny, Anton Mitrofanov, Ivan Medennikov and Yuri Matveev
Sensors 2021, 21(9), 3063; https://doi.org/10.3390/s21093063 - 28 Apr 2021
Cited by 11 | Viewed by 3246
Abstract
With the rapid development of speech assistants, adapting server-intended automatic speech recognition (ASR) solutions to a direct device has become crucial. For on-device speech recognition tasks, researchers and industry prefer end-to-end ASR systems, as they can be made resource-efficient while maintaining higher quality than hybrid systems. However, building end-to-end models requires a significant amount of speech data. Personalization, which mainly involves handling out-of-vocabulary (OOV) words, is another challenging task associated with speech assistants. In this work, we consider building an effective end-to-end ASR system in low-resource setups with a high OOV rate, embodied in Babel Turkish and Babel Georgian tasks. We propose a method of dynamic acoustic unit augmentation based on the Byte Pair Encoding with dropout (BPE-dropout) technique. The method non-deterministically tokenizes utterances to extend the tokens’ contexts and to regularize their distribution for the model’s recognition of unseen words. It also reduces the need for an optimal subword vocabulary size search. The technique provides a steady improvement in regular and personalized (OOV-oriented) speech recognition tasks (at least 6% relative word error rate (WER) and 25% relative F-score) at no additional computational cost. Owing to the use of BPE-dropout, our monolingual Turkish Conformer has achieved a competitive result with 22.2% character error rate (CER) and 38.9% WER, which is close to the best published multilingual system.
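
Subword-regularization sampling of this kind is exposed by the SentencePiece library; a minimal sketch (assuming a BPE model already trained and saved to a file named bpe.model; the Turkish example phrase is ours) shows how the same utterance tokenizes differently on each call:

    import sentencepiece as spm

    sp = spm.SentencePieceProcessor(model_file="bpe.model")  # trained BPE model (assumed)
    utterance = "merhaba arkadaslar"
    # enable_sampling drops BPE merges at random (alpha sets the dropout rate),
    # so repeated calls yield different segmentations of the same utterance
    for _ in range(3):
        print(sp.encode(utterance, out_type=str, enable_sampling=True,
                        alpha=0.1, nbest_size=-1))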

21 pages, 389 KiB  
Article
LBA: Online Learning-Based Assignment of Patients to Medical Professionals
by Hanan Rosemarin, Ariel Rosenfeld, Steven Lapp and Sarit Kraus
Sensors 2021, 21(9), 3021; https://doi.org/10.3390/s21093021 - 25 Apr 2021
Cited by 3 | Viewed by 2510
Abstract
Central to any medical domain is the challenging patient-to-medical-professional assignment task, aimed at getting the right patient to the right medical professional at the right time. This task is highly complex and involves partially conflicting objectives, such as minimizing patient wait-time while providing a maximal level of care. To tackle this challenge, medical institutions apply common scheduling heuristics to guide their decisions. These generic heuristics often do not align with the expectations of each specific medical institution. In this article, we propose a novel learning-based online optimization approach we term Learning-Based Assignment (LBA), which provides decision makers with a tailored, data-centered decision support algorithm that facilitates dynamic, institution-specific multi-variate decisions, without altering existing medical workflows. We adapt our generic approach to two medical settings: (1) the assignment of patients to caregivers in an emergency department; and (2) the assignment of medical scans to radiologists. In an extensive empirical evaluation, using real-world data and medical experts’ input from two distinctive medical domains, we show that our proposed approach provides a dynamic, robust and configurable data-driven solution which can significantly improve upon existing medical practices.
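
An online assignment step of this kind can be pictured as scoring each available professional for the arriving patient with a learned model; the sketch below is our schematic reading of the idea, not the LBA algorithm itself (the stand-in scorer is hypothetical):

    def assign(patient, professionals, score):
        # score(patient, professional) -> learned estimate of assignment quality
        return max(professionals, key=lambda pro: score(patient, pro))

    # Hypothetical stand-in scorer: prefer the least-loaded qualified professional.
    def score(patient, pro):
        return (patient["specialty"] in pro["skills"]) - 0.1 * pro["queue_len"]

    pros = [{"name": "A", "skills": {"radiology"}, "queue_len": 4},
            {"name": "B", "skills": {"radiology"}, "queue_len": 1}]
    print(assign({"specialty": "radiology"}, pros, score)["name"])  # B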

19 pages, 9133 KiB  
Article
Vision–Language–Knowledge Co-Embedding for Visual Commonsense Reasoning
by JaeYun Lee and Incheol Kim
Sensors 2021, 21(9), 2911; https://doi.org/10.3390/s21092911 - 21 Apr 2021
Cited by 6 | Viewed by 4566
Abstract
Visual commonsense reasoning is an intelligent task performed to decide the most appropriate answer to a question while providing the rationale or reason for the answer when an image, a natural language question, and candidate responses are given. For effective visual commonsense reasoning, both the knowledge acquisition problem and the multimodal alignment problem need to be solved. Therefore, we propose a novel Vision–Language–Knowledge Co-embedding (ViLaKC) model that extracts knowledge graphs relevant to the question from an external knowledge base, ConceptNet, and uses them together with the input image to answer the question. The proposed model uses a pretrained vision–language–knowledge embedding module, which co-embeds multimodal data including images, natural language texts, and knowledge graphs into a single feature vector. To reflect the structural information of the knowledge graph, the proposed model uses the graph convolutional neural network layer to embed the knowledge graph first and then uses multi-head self-attention layers to co-embed it with the image and natural language question. The effectiveness and performance of the proposed model are experimentally validated using the VCR v1.0 benchmark dataset.
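
To make the wiring concrete, here is a schematic PyTorch sketch (the dimensions, toy adjacency, and single-layer GCN are our assumptions, not the ViLaKC code): a graph-convolution layer embeds the knowledge-graph nodes, and multi-head self-attention then runs over the concatenated image, question, and graph tokens:

    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.lin = nn.Linear(dim, dim)

        def forward(self, x, adj):
            # x: (nodes, dim); adj: row-normalized adjacency (nodes, nodes)
            return torch.relu(self.lin(adj @ x))

    dim = 64
    gcn = GCNLayer(dim)
    attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

    img_tokens = torch.randn(1, 36, dim)   # image region features (hypothetical)
    txt_tokens = torch.randn(1, 20, dim)   # question token embeddings (hypothetical)
    nodes = torch.randn(8, dim)            # knowledge-graph node features
    adj = torch.eye(8)                     # stand-in adjacency matrix
    kg_tokens = gcn(nodes, adj).unsqueeze(0)

    seq = torch.cat([img_tokens, txt_tokens, kg_tokens], dim=1)
    fused, _ = attn(seq, seq, seq)         # joint co-embedding of all modalities
    print(fused.shape)                     # (1, 64, 64)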

22 pages, 2935 KiB  
Article
Supervisor-Worker Problems with an Application in Education
by Dorin Shmaryahu, Kobi Gal and Guy Shani
Sensors 2021, 21(6), 1965; https://doi.org/10.3390/s21061965 - 11 Mar 2021
Viewed by 2117
Abstract
In many e-learning settings, allowing students to choose which skills to practice encourages their motivation and contributes to learning. However, when given choice, students may prefer to practice skills that they already master, rather than practice skills they need to master. On the other hand, requiring students only to practice their required skills may reduce their motivation and lead to dropout. In this paper, we model this tradeoff as a multi-agent planning task, which we call SWOPP (Supervisor–Worker Problem with Partially Overlapping goals), involving two agents—a supervisor (teacher) and a worker (student)—each with different, yet non-conflicting, goals. The supervisor and worker share joint goals (mastering skills). The worker plans to achieve his/her own goals (completing an e-learning session) at a minimal cost (effort required to solve problems). The supervisor guides the worker towards achieving the joint goals by controlling the problems in the choice set for the worker. We provide a formal model for the SWOPP task and two sound and complete algorithms for the supervisor to guide the worker’s plan to achieve their joint goals. We deploy SWOPP for the first time in a real-world study to personalize math questions for K5 students using e-learning software in schools. We show that SWOPP was able to guide students’ interactions with the software to practice necessary skills without deterring their motivation.
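
A minimal sketch of the supervisor's lever (our toy reading of the setup, not one of the paper's algorithms): restrict the worker's choice set so that every offered problem practices at least one joint-goal skill the student has not yet mastered:

    def supervisor_choice_set(problems, joint_goals, mastered):
        # problems: problem id -> set of skills it practices
        needed = joint_goals - mastered
        return [p for p, skills in problems.items() if skills & needed]

    problems = {"p1": {"fractions"}, "p2": {"addition"}, "p3": {"fractions", "geometry"}}
    offered = supervisor_choice_set(problems,
                                    joint_goals={"fractions", "geometry"},
                                    mastered={"addition"})
    print(offered)  # ['p1', 'p3']: the student still chooses, but every option helps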

14 pages, 3035 KiB  
Article
DailyCog: A Real-World Functional Cognitive Mobile Application for Evaluating Mild Cognitive Impairment (MCI) in Parkinson’s Disease
by Sara Rosenblum, Ariella Richardson, Sonya Meyer, Tal Nevo, Maayan Sinai and Sharon Hassin-Baer
Sensors 2021, 21(5), 1788; https://doi.org/10.3390/s21051788 - 4 Mar 2021
Cited by 5 | Viewed by 4045
Abstract
Parkinson’s disease (PD) is the second most common progressive neurodegenerative disorder affecting patient functioning and quality of life. Aside from the motor symptoms of PD, cognitive impairment may occur at early stages of PD and has a substantial impact on patient emotional and physical health. Detecting these early signs through actual daily functioning while the patient is still functionally independent is challenging. We developed DailyCog—a smartphone application for the detection of mild cognitive impairment. DailyCog includes an environment that simulates daily tasks, such as making a drink and shopping, as well as a self-report questionnaire related to daily events performed at home requiring executive functions and visual–spatial abilities, and psychomotor speed. We present the detailed design of DailyCog and discuss various considerations that influenced the design. We tested DailyCog on patients with mild cognitive impairment in PD. Our case study demonstrates how the markers we used coincide with the cognitive levels of the users. We present the outcome of our usability study that found that most users were able to use our app with ease, and provide details on how various features were used, along with some of the difficulties that were identified.

17 pages, 4181 KiB  
Article
NMN-VD: A Neural Module Network for Visual Dialog
by Yeongsu Cho and Incheol Kim
Sensors 2021, 21(3), 931; https://doi.org/10.3390/s21030931 - 30 Jan 2021
Cited by 4 | Viewed by 2882
Abstract
Visual dialog demonstrates several important aspects of multimodal artificial intelligence; however, it is hindered by visual grounding and visual coreference resolution problems. To overcome these problems, we propose the novel neural module network for visual dialog (NMN-VD). NMN-VD is an efficient question-customized modular network model that combines only the modules required for deciding answers after analyzing input questions. In particular, the model includes a Refer module that effectively finds the visual area indicated by a pronoun using a reference pool to solve a visual coreference resolution problem, which is an important challenge in visual dialog. In addition, the proposed NMN-VD model includes a method for distinguishing and handling impersonal pronouns that do not require visual coreference resolution from general pronouns. Furthermore, a new Compare module that effectively handles comparison questions found in visual dialogs is included in the model, as well as a Find module that applies a triple-attention mechanism to solve visual grounding problems between the question and the image. The results of various experiments conducted using a set of large-scale benchmark data verify the efficacy and high performance of our proposed NMN-VD model.
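
The question-customized assembly can be pictured as a small dispatcher (a toy illustration of the concept under our own assumptions, not the NMN-VD implementation): include Refer only for questions with personal pronouns, and add Compare for comparison questions:

    def assemble_program(question):
        q = question.lower()
        tokens = q.replace("?", "").split()
        program = []
        if any(p in tokens for p in ("he", "she", "they", "his", "her", "their")):
            program.append("Refer")    # resolve visual coreference via the reference pool
        program.append("Find")         # ground the question in image regions
        if " or " in q or "compared" in q:
            program.append("Compare")  # handle comparison questions
        program.append("Answer")
        return program

    print(assemble_program("Is he happy or sad?"))  # ['Refer', 'Find', 'Compare', 'Answer']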

15 pages, 6251 KiB  
Article
Using a Stochastic Agent Model to Optimize Performance in Divergent Interest Tacit Coordination Games
by Dor Mizrahi, Inon Zuckerman and Ilan Laufer
Sensors 2020, 20(24), 7026; https://doi.org/10.3390/s20247026 - 8 Dec 2020
Cited by 16 | Viewed by 2273
Abstract
In recent years, collaborative robots have become major market drivers in Industry 5.0, which aims to incorporate them alongside humans in a wide array of settings ranging from welding to rehabilitation. Improving human–machine collaboration entails using computational algorithms that will save processing as well as communication costs. In this study, we have constructed an agent that can choose when to cooperate using an optimal strategy. The agent was designed to operate in the context of divergent interest tacit coordination games in which communication between the players is not possible and the payoff is not symmetric. The agent’s model was based on a behavioral model that can predict the probability of a player converging on prominent solutions with salient features (e.g., focal points) based on the player’s Social Value Orientation (SVO) and the specific game features. The SVO theory pertains to the preferences of decision makers when allocating joint resources between themselves and another player in the context of behavioral game theory. The agent selected stochastically between two possible policies, a greedy or a cooperative policy, based on the probability of a player converging on a focal point. The distribution of the number of points obtained by the autonomous agent incorporating the SVO in the model was better than the results obtained by the human players who played against each other (i.e., the distribution associated with the agent had a higher mean value). Moreover, the distribution of points gained by the agent was better than any of the separate strategies the agent could choose from, namely, always choosing a greedy or a focal point solution. To the best of our knowledge, this is the first attempt to construct an intelligent agent that maximizes its utility by incorporating the belief system of the player in the context of tacit bargaining. This reward-maximizing strategy selection process based on the SVO can also be potentially applied in other human–machine contexts, including multiagent systems.
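
Stripped to its core, the stochastic policy selection reads like the following sketch (our illustration; in the paper the probability comes from the SVO-based behavioral model, which is stubbed out here as a constant):

    import random

    def select_policy(p_focal, rng=random):
        # p_focal: predicted probability that the co-player converges on the focal point
        return "cooperative (focal point)" if rng.random() < p_focal else "greedy"

    p_focal = 0.7  # hypothetical output of the SVO-based behavioral model
    print(select_policy(p_focal))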

10 pages, 309 KiB  
Communication
SAIF: A Correction-Detection Deep-Learning Architecture for Personal Assistants
by Amos Azaria and Keren Nivasch
Sensors 2020, 20(19), 5577; https://doi.org/10.3390/s20195577 - 29 Sep 2020
Cited by 1 | Viewed by 2176
Abstract
Intelligent agents that can interact with users using natural language are becoming increasingly common. Sometimes an intelligent agent may not correctly understand a user command or may not perform it properly. In such cases, the user might try a second time by giving the agent another, slightly different command. Giving an agent the ability to detect such user corrections might help it fix its own mistakes and avoid making them in the future. In this work, we consider the problem of automatically detecting user corrections using deep learning. We develop a multimodal architecture called SAIF, which detects such user corrections, taking as inputs the user’s voice commands as well as their transcripts. Voice inputs allow SAIF to take advantage of sound cues, such as tone, speed, and word emphasis. In addition to sound cues, our model uses transcripts to determine whether a command is a correction to the previous command. Our model also obtains internal input from the agent, indicating whether the previous command was executed successfully or not. Finally, we release a unique dataset in which users interacted with an intelligent agent assistant by giving it commands. This dataset includes labels on pairs of consecutive commands, which indicate whether the latter command is in fact a correction of the former command. We show that SAIF outperforms current state-of-the-art methods on this dataset.
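
As a schematic of the fusion described here (the layer sizes and wiring are our assumptions, not the SAIF architecture), a minimal PyTorch module might combine acoustic features, a transcript embedding, and the agent's execution-success flag into a correction probability:

    import torch
    import torch.nn as nn

    class CorrectionDetector(nn.Module):
        def __init__(self, audio_dim=40, text_dim=128, hidden=64):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Linear(audio_dim + text_dim + 1, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, audio_feats, text_emb, exec_success):
            # exec_success: 1.0 if the previous command succeeded, else 0.0
            x = torch.cat([audio_feats, text_emb, exec_success], dim=-1)
            return torch.sigmoid(self.fuse(x))  # P(latter command is a correction)

    model = CorrectionDetector()
    p = model(torch.randn(1, 40), torch.randn(1, 128), torch.ones(1, 1))
    print(p.item())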

Review

41 pages, 2803 KiB  
Review
Using Artificial Intelligence for Assistance Systems to Bring Motor Learning Principles into Real World Motor Tasks
by Koenraad Vandevoorde, Lukas Vollenkemper, Constanze Schwan, Martin Kohlhase and Wolfram Schenck
Sensors 2022, 22(7), 2481; https://doi.org/10.3390/s22072481 - 23 Mar 2022
Cited by 2 | Viewed by 5874
Abstract
Humans learn movements naturally, but it takes a lot of time and training to achieve expert performance in motor skills. In this review, we show how modern technologies can support people in learning new motor skills. First, we introduce important concepts in motor control, motor learning, and motor skill learning. We also give an overview of the rapid expansion of machine learning algorithms and sensor technologies for human motion analysis. The integration of motor learning principles, machine learning algorithms, and recent sensor technologies has the potential to yield AI-guided assistance systems for motor skill training. We give our perspective on this integration of different fields to transition from motor learning research in laboratory settings to real-world environments and real-world motor tasks and propose a stepwise approach to facilitate this transition.

48 pages, 1481 KiB  
Review
Conversational Agents: Goals, Technologies, Vision and Challenges
by Merav Allouch, Amos Azaria and Rina Azoulay
Sensors 2021, 21(24), 8448; https://doi.org/10.3390/s21248448 - 17 Dec 2021
Cited by 67 | Viewed by 14128
Abstract
In recent years, conversational agents (CAs) have become ubiquitous and are a presence in our daily routines. It seems that the technology has finally matured enough to advance the use of CAs in various domains, including commercial, healthcare, educational, political, industrial, and personal domains. In this study, the main areas in which CAs are successful are described, along with the main technologies that enable the creation of CAs. CAs, which are capable of conducting ongoing communication with humans, build on natural-language processing, deep learning, and technologies that integrate emotional aspects. The technologies used for the evaluation of CAs and publicly available datasets are outlined. In addition, several areas for future research are identified to address moral and security issues, given the current state of CA-related technological developments. The uniqueness of our review is that an overview of the concepts and building blocks of CAs is provided, and CAs are categorized according to their abilities and main application domains. In addition, the primary tools and datasets that may be useful for the development and evaluation of CAs of different categories are described. Finally, some thoughts and directions for future research are provided, and domains that may benefit from conversational agents are introduced.
