Evaluation of Emotional Satisfaction Using Questionnaires in Voice-Based Human–AI Interaction
Abstract
1. Introduction
2. Materials and Methods
2.1. Voice-Based Intelligent System Design
2.2. Experiment Procedure
2.3. Participants
2.4. Data Collection
2.5. Statistical Method
3. Results
3.1. Exploratory Factor Analysis
3.2. Analysis of Variance
3.3. Classification of Emotional Satisfaction According to the Design Parameters
4. Discussion
4.1. Validity of the Kansei Engineering Approach
4.2. User Emotion Classification According to Design Parameters
4.3. Integration of Kansei Engineering Approach and Biosignals
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. ANOVA Tables for Design Parameters
In the tables below, ** marks effects significant at p < 0.01.

One-way ANOVA: response time

Pleasurability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Response time | 3 | 1.373 | 0.458 | 0.45 | 0.715 |
Error | 176 | 177.627 | 1.009 | | |
Total | 179 | 179.000 | | | |

Reliability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Response time | 3 | 1.068 | 0.356 | 0.35 | 0.788 |
Error | 176 | 177.932 | 1.011 | | |
Total | 179 | 179.000 | | | |
One-way ANOVA: number of trials

Pleasurability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Number of trials | 2 | 23.72 | 11.859 | 13.52 | 0.000 ** |
Error | 177 | 155.28 | 0.877 | | |
Total | 179 | 179.000 | | | |

Reliability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Number of trials | 2 | 20.96 | 10.482 | 11.74 | 0.000 ** |
Error | 177 | 158.04 | 0.893 | | |
Total | 179 | 179.000 | | | |
One-way ANOVA: pace of the answer

Pleasurability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Pace of the answer | 2 | 1.793 | 0.897 | 0.90 | 0.410 |
Error | 177 | 177.207 | 1.001 | | |
Total | 179 | 179.000 | | | |

Reliability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Pace of the answer | 2 | 2.187 | 1.094 | 1.09 | 0.337 |
Error | 177 | 176.813 | 0.999 | | |
Total | 179 | 179.000 | | | |
One-way ANOVA: sentence structure

Pleasurability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Sentence structure | 2 | 11.05 | 5.527 | 5.83 | 0.004 ** |
Error | 177 | 167.95 | 0.949 | | |
Total | 179 | 179.000 | | | |

Reliability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Sentence structure | 2 | 9.884 | 4.942 | 5.17 | 0.007 ** |
Error | 177 | 169.116 | 0.955 | | |
Total | 179 | 179.000 | | | |
Two-way ANOVA: number of trials × sentence structure

Pleasurability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Number of trials | 2 | 23.718 | 11.859 | 14.22 | 0.000 ** |
Sentence structure | 2 | 11.055 | 5.527 | 6.63 | 0.002 ** |
Trials × Structure | 4 | 1.608 | 0.402 | 0.48 | 0.749 |
Error | 171 | 142.619 | 0.834 | | |
Total | 179 | 179.000 | | | |

Reliability

Source | DF | Adj SS | Adj MS | F-Value | p-Value |
---|---|---|---|---|---|
Number of trials | 2 | 20.964 | 10.482 | 12.21 | 0.000 ** |
Sentence structure | 2 | 9.884 | 4.942 | 5.76 | 0.004 ** |
Trials × Structure | 4 | 1.375 | 0.345 | 0.40 | 0.808 |
Error | 171 | 146.777 | 0.858 | | |
Total | 179 | 179.000 | | | |
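For readers who want to reproduce this style of analysis on their own data, the sketch below runs a two-way ANOVA with an interaction term, mirroring the layout of the last table (number of trials × sentence structure). All data values, column names, and effect sizes in the snippet are hypothetical placeholders, and statsmodels is our choice of tool, not necessarily the software used by the authors; for a balanced design its Type II sums of squares play the same role as the adjusted sums of squares reported above.

```python
# Minimal sketch of a two-way ANOVA with interaction on hypothetical data
# (not the study's ratings). Requires numpy, pandas, and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
trials = [1, 2, 3]                                     # trials needed to get the right answer
structures = ["answer_only", "repeat_q_a", "q_a_ref"]  # sentence structure of the answer

# 20 simulated standardized ratings per condition (placeholder values).
rows = [
    {"trials": t, "structure": s, "rating": rng.normal(loc=-0.3 * t, scale=1.0)}
    for t in trials for s in structures for _ in range(20)
]
df = pd.DataFrame(rows)

# Fit rating ~ trials * structure with both factors treated as categorical.
model = ols("rating ~ C(trials) * C(structure)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
print(anova_table)                             # columns: sum_sq, df, F, PR(>F)
```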
Design Parameters | Scope |
---|---|
Response time to get to the next interaction | Between 1 and 5 s |
Number of trials to get the right answer for a question | 1 trial; 2 trials; 3 trials |
Pace of the answer | 4 syllables/s; 6 syllables/s; 8 syllables/s |
Sentence structure of the answer | Answer only; repeat the question and then answer; repeat the question and then answer with a clear reference source |
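As an illustration only, the snippet below organizes the design-parameter levels listed above into a single data structure and enumerates their full factorial crossing. The variable names are ours, the exact response-time levels are not listed in the table (only the 1–5 s range), and the paper may not have crossed all four parameters, so this is a sketch of how the level definitions could be encoded, not the study's stimulus plan.

```python
# Illustrative enumeration of candidate stimulus conditions from the design
# parameters above. Names and the response-time values are placeholders.
from itertools import product

response_time_s = [1, 2, 3, 5]       # placeholder levels within the stated 1-5 s range
num_trials = [1, 2, 3]               # trials needed to get the right answer
pace_syll_per_s = [4, 6, 8]          # speaking pace of the answer (syllables/s)
sentence_structure = [
    "answer only",
    "repeat the question, then answer",
    "repeat the question, then answer with a clear reference source",
]

# Full factorial crossing of the listed levels.
conditions = list(product(response_time_s, num_trials, pace_syll_per_s, sentence_structure))
print(f"{len(conditions)} candidate combinations")
```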
No | Positive | Negative | No | Positive | Negative |
---|---|---|---|---|---|
1 | Refreshing | Uncomfortable | 16 | Clever | Silly |
2 | Interesting | Indifferent | 17 | Proper | Ridiculous |
3 | Anticipated | Broken hearted | 18 | Spontaneous | Awkward |
4 | Glad | Perplexed | 19 | Suitable | Inappropriate |
5 | Natural | Abrupt | 20 | Good | Irritated |
6 | Confident | Anxiety | 21 | Pleasant | Boring |
7 | Detailed | Sloppy | 22 | Concentrated | Dejected |
8 | Perspicuous | Extensive | 23 | Organized | Confused |
9 | Reliable | Unreliable | 24 | Friendly | Picky |
10 | Bracing | Unpleasant | 25 | Lighthearted | Bothersome |
11 | Simple | Vague | 26 | Obvious | Suspicious |
12 | Easy | Difficult | 27 | Feel Unburdened | Stuffy |
13 | Fresh | Trite | 28 | Satisfied | Insufficient |
14 | Reassured | Concerned | 29 | Attractive | Banal |
15 | Stable | Precarious | 30 | Hopeful | Hopeless |
Kansei Word | Factor 1 (Pleasurability) | Kansei Word | Factor 2 (Reliability) |
---|---|---|---|
Natural–Abrupt | 0.804 | Lighthearted–Bothersome | 0.789 |
Bracing–Unpleasant | 0.783 | Interesting–Indifferent | 0.768 |
Reassured–Concerned | 0.751 | Hopeful–Broken hearted | 0.744 |
Refreshing–Uncomfortable | 0.739 | Detailed–Sloppy | 0.744 |
Spontaneous–Awkward | 0.690 | Friendly–Picky | 0.734 |
Reliable–Unreliable | 0.679 | Feel Unburdened–Stuffy | 0.728 |
Good–Irritated | 0.677 | Satisfied–Insufficient | 0.685 |
Concentrated–Dejected | 0.674 | Suitable–Inappropriate | 0.680 |
Glad–Perplexed | 0.658 | Pleasant–Boring | 0.655 |
Attractive–Banal | 0.646 | | |
Variance | 10.253 | Variance | 9.233 |
% Variance | 34.2 | % Variance | 30.8 |
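A minimal sketch of how a two-factor solution like the one above could be extracted from semantic-differential ratings follows, assuming a respondents × Kansei-word rating matrix. The data here are random placeholders, and varimax-rotated FactorAnalysis from scikit-learn is one of several reasonable tools, not necessarily the procedure used in the paper.

```python
# Sketch: exploratory factor analysis with varimax rotation on a hypothetical
# (respondents x 30 Kansei words) rating matrix. Random data, illustration only.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_respondents, n_words = 180, 30
ratings = rng.integers(1, 8, size=(n_respondents, n_words)).astype(float)  # 7-point SD scale

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(ratings)

loadings = fa.components_.T  # shape (30 words, 2 factors)
for factor in range(2):
    top = np.argsort(-np.abs(loadings[:, factor]))[:5]
    print(f"Factor {factor + 1}: highest-loading word indices {top.tolist()}")
```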
Pleasurability

Number of Trials | Sentence Structure | Avg | Cluster |
---|---|---|---|
1 Trial | Q + A + Ref | 0.88 | A |
1 Trial | Repeat Q + A | 0.501 | A B |
2 Trials | Q + A + Ref | 0.077 | A B C |
1 Trial | Answer only | 0.032 | A B C |
2 Trials | Repeat Q + A | −0.035 | B C |
3 Trials | Q + A + Ref | −0.111 | B C |
2 Trials | Answer only | −0.219 | B C |
3 Trials | Repeat Q + A | −0.349 | B C |
3 Trials | Answer only | −0.776 | C |

Reliability

Number of Trials | Sentence Structure | Avg | Cluster |
---|---|---|---|
1 Trial | Repeat Q + A | 0.611 | A |
1 Trial | Q + A + Ref | 0.561 | A B |
2 Trials | Q + A + Ref | 0.326 | A B C |
2 Trials | Repeat Q + A | 0.235 | A B C D |
1 Trial | Answer only | −0.053 | A B C D |
2 Trials | Answer only | −0.325 | B C D |
3 Trials | Q + A + Ref | −0.358 | B C D |
3 Trials | Repeat Q + A | −0.382 | C D |
3 Trials | Answer only | −0.615 | D |
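The cluster letters above come from a post-hoc comparison of the nine trials × structure combinations; the tables do not state which procedure was used, so the sketch below shows Tukey's HSD from statsmodels purely as one plausible way to run such pairwise comparisons, on placeholder factor scores. The grouping-letter display itself is typically produced by the statistics package.

```python
# Sketch: Tukey HSD pairwise comparison of the nine (trials, structure)
# combinations on placeholder scores. Not the study data or necessarily its method.
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
combos = [f"{t} trial(s) / {s}"
          for t in (1, 2, 3)
          for s in ("answer only", "repeat Q + A", "Q + A + ref")]

# 20 placeholder pleasurability scores per combination.
scores = np.concatenate([rng.normal(loc=mu, scale=1.0, size=20)
                         for mu in np.linspace(0.9, -0.8, len(combos))])
groups = np.repeat(combos, 20)

result = pairwise_tukeyhsd(endog=scores, groups=groups, alpha=0.05)
print(result.summary())  # non-rejected pairs are candidates for sharing a cluster letter
```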
Level of Satisfaction | Design Parameters |
---|---|
High Satisfaction | 1 trial, repeat the question and then answer with a clear reference source; 1 trial, repeat the question and then answer |
Mid-High Satisfaction | 1 trial, answer only; 2 trials, repeat the question and then answer with a clear reference source; 2 trials, repeat the question and then answer |
Mid-Low Satisfaction | 2 trials, answer only; 3 trials, repeat the question and then answer with a clear reference source; 3 trials, repeat the question and then answer |
Low Satisfaction | 3 trials, answer only |
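The classification in the table can be read as a simple lookup from (number of trials, sentence structure) to a satisfaction level. A direct transcription into a Python mapping is shown below; the function and key names are ours, for illustration only.

```python
# Direct transcription of the satisfaction classification table above.
ANSWER_ONLY = "answer only"
REPEAT_QA = "repeat the question and then answer"
QA_WITH_REF = "repeat the question and then answer with a clear reference source"

SATISFACTION = {
    (1, QA_WITH_REF): "High",
    (1, REPEAT_QA):   "High",
    (1, ANSWER_ONLY): "Mid-High",
    (2, QA_WITH_REF): "Mid-High",
    (2, REPEAT_QA):   "Mid-High",
    (2, ANSWER_ONLY): "Mid-Low",
    (3, QA_WITH_REF): "Mid-Low",
    (3, REPEAT_QA):   "Mid-Low",
    (3, ANSWER_ONLY): "Low",
}

def satisfaction_level(trials: int, structure: str) -> str:
    """Return the satisfaction class for a (trials, sentence structure) condition."""
    return SATISFACTION[(trials, structure)]

print(satisfaction_level(1, QA_WITH_REF))  # -> "High"
```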
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).