Coordination of Speaking Opportunities in Virtual Reality: Analyzing Interaction Dynamics and Context-Aware Strategies
Abstract
1. Introduction
- R1: What interaction factors significantly influence participants’ ability to successfully acquire speaking turns in VR communication?
- R2: How do the challenges participants face in acquiring speaking turns vary across different interaction scenarios?
2. Related Work
3. Methods
3.1. Dataset
3.2. Turn-Taking Failures
- Obstructed: The speaking intention did not result in speaking behavior, or the turn initiation involved overlapping speech.
- Unobstructed: The speaking intention successfully transitioned into speaking behavior, and the turn initiation was characterized by non-overlapping, smooth turn transitions.
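The two labels above amount to a simple decision rule over each speaking-intention event. A minimal sketch of that rule, with illustrative flag names rather than the dataset's actual annotation schema:

```python
def classify_turn_attempt(became_speech: bool, overlapped: bool) -> str:
    """Label a speaking-intention event per the scheme above.

    became_speech: whether the intention actually turned into an utterance.
    overlapped:    whether that utterance began in overlap with another speaker.
    Both flags are hypothetical inputs; the real annotations may differ.
    """
    if not became_speech or overlapped:
        return "obstructed"    # intention failed, or turn started in overlapping speech
    return "unobstructed"      # smooth, non-overlapping turn transition
```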
3.3. Interaction Dynamic Features
4. Results
4.1. Regression Analysis
4.2. Cluster Analysis
- C1—Prolonged Single-turn Scenario: This cluster is characterized by the highest proportion of speaking duration but a low number of speaking turns and almost no speech overlap. Participants wishing to speak hold a relatively low status in the current interaction. This suggests that the interactions in this cluster are primarily dominated by long monologues from a few participants, with limited group interaction, leaning toward a unidirectional information-sharing pattern.
- C2—High-status Role Scenario: This cluster exhibits moderate levels of speaking duration and speaking turns, with a low proportion of speech overlap. A notable feature is that participants wishing to speak have a relatively high status. This reflects interactions where participants expressing speaking intentions are closely tied to the ongoing exchange and hold a relatively dominant role.
- C3—Intense Interaction Scenario: This cluster features the highest rate of speech overlap and speaking turns, with a speaking duration proportion close to the highest. Participants wishing to speak have high status, and the speaking intentions of other participants are also strong. Group members speak frequently, indicating high engagement and interaction density.
- C4—Low-activity Scenario: This cluster is characterized by long periods without speech activity, with participants speaking infrequently, resulting in an overall silent state. This indicates a “cold” interaction phase where group members exhibit low willingness to participate, lacking active speaking and engagement.
- C5—High-competition Scenario: This cluster shows the highest number of speaking intentions, moderate levels of speaking duration and speaking turns, and a low proportion of speech overlap. Such interactions likely occur when a topic of broad interest prompts participants to express speaking intentions collectively, resulting in turn-taking competition and reflecting a high level of interaction demand.
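Once cluster centers are fitted, assigning a new interaction window to one of these scenarios reduces to nearest-centroid matching in the standardized feature space. The sketch below assumes a three-feature space and placeholder centroid values, not the study's fitted cluster centers:

```python
from math import dist

# Placeholder centroids in a standardized feature space
# (speaking duration ratio, utterance count, overlap ratio); illustrative only.
CENTROIDS = {
    "C1": (1.5, -0.8, -0.5),   # long monologues, few turns, almost no overlap
    "C3": (1.2, 1.5, 2.0),     # intense interaction: many turns, high overlap
    "C4": (-1.5, -1.0, -0.5),  # low activity: mostly silence
}

def assign_cluster(features):
    """Return the label of the nearest centroid (Euclidean distance)."""
    return min(CENTROIDS, key=lambda c: dist(features, CENTROIDS[c]))
```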
4.3. Survival Analysis
5. Discussion
5.1. Key Features and Predictions of Turn-Taking Coordination Failures
5.2. Impacts of Different Interaction Scenarios on Turn-Taking and System Support Strategies
5.3. Limitations and Future Work
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Dimensions | Features | Description of Features |
---|---|---|
IS | Negative Logarithm of Speaking Interval | The negative logarithm of the time interval since the participant’s last speech, representing their positional difference in conversational dynamics. |
SR | Group Area | The area of the minimum bounding rectangle that encompasses all participants’ positions in the virtual environment during a speaking intention expression. |
IP | Speaking Duration Ratio | The proportion of time, within the analysis window, where at least one participant is speaking, indicating overall vocal activity. |
IP | Utterance Count | The total number of utterances within the analysis window, measuring the frequency of speech events. |
IP | Speaking Intention Count | The number of speaking intention tags from other participants within the analysis window, reflecting the level of competition for speaking turns. |
CQ | Max Speaker Duration Ratio | The ratio of the longest speaker’s speaking time to the total speaking time, indicating conversational balance or dominance. |
CQ | Speaking Overlap Ratio | The proportion of overlapping speaking time to the total speaking time, reflecting the degree of conversational disorder. |
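Two of the features above can be computed directly from utterance timestamps. A sketch under assumed units (seconds) and an assumed `(start, end)` utterance representation, which may differ from the dataset's actual encoding:

```python
import math

def neg_log_speaking_interval(now: float, last_speech_end: float) -> float:
    """Negative logarithm of the time since this participant last spoke."""
    return -math.log(now - last_speech_end)

def speaking_duration_ratio(utterances, win_start: float, win_end: float) -> float:
    """Fraction of the analysis window in which at least one participant speaks.

    `utterances` is a list of (start, end) pairs; overlapping intervals are
    merged so simultaneous speech is not double-counted.
    """
    covered, cursor = 0.0, win_start
    for s, e in sorted(utterances):
        s, e = max(s, cursor), min(e, win_end)
        if e > s:
            covered += e - s
            cursor = e
    return covered / (win_end - win_start)
```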
Features | Mean | Std | Min | Median | Max |
---|---|---|---|---|---|
Neg Log Speaking Interval | −2.563 | 1.463 | −5.710 | −2.617 | 2.900 |
Group Area | 15.258 | 7.403 | 0.768 | 17.067 | 32.755 |
Speaking Duration Ratio | 0.476 | 0.364 | 0.000 | 0.481 | 1.000 |
Utterance Count | 1.049 | 0.841 | 0.000 | 1.000 | 5.000 |
Speaking Intention Count | 0.169 | 0.396 | 0.000 | 0.000 | 2.000 |
Max Speaker Duration Ratio | 0.729 | 0.416 | 0.000 | 1.000 | 1.000 |
Speaking Overlap Ratio | 0.043 | 0.138 | 0.000 | 0.000 | 0.889 |
Dimensions | Features | Coef. | Std Err | z | p-Value | VIF |
---|---|---|---|---|---|---|
IS | Neg Log Speaking Interval | 0.2710 * | 0.110 | 2.470 | 0.014 | 1.076 |
SR | Group Area | −0.0852 | 0.108 | −0.790 | 0.429 | 1.020 |
IP | Speaking Duration Ratio | −0.9142 ** | 0.160 | −5.721 | 0.000 | 2.250 |
IP | Utterance Count | −0.5211 ** | 0.191 | −2.733 | 0.006 | 2.992 |
IP | Speaking Intention Count | −0.5344 ** | 0.117 | −4.577 | 0.000 | 1.095 |
CQ | Max Speaker Duration Ratio | 0.3258 | 0.167 | 1.948 | 0.051 | 2.078 |
CQ | Speaking Overlap Ratio | 0.3875 ** | 0.148 | 2.614 | 0.009 | 1.711 |
 | Constant | 0.4707 ** | 0.108 | 4.364 | 0.000 | 1.000 |

\* p < 0.05; ** p < 0.01.
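Since the model in the table is a logistic regression, each coefficient is a change in log-odds; exponentiating gives odds ratios. A small sketch using the reported values (an illustrative interpretation of the table, not a re-fit of the model):

```python
import math

# Significant coefficients reported in the regression table above.
COEFS = {
    "Neg Log Speaking Interval": 0.2710,
    "Speaking Duration Ratio": -0.9142,
    "Utterance Count": -0.5211,
    "Speaking Intention Count": -0.5344,
    "Speaking Overlap Ratio": 0.3875,
}

# Odds ratio per one-unit increase in each feature: exp(coef).
odds_ratios = {name: math.exp(b) for name, b in COEFS.items()}

# e.g. a unit increase in Speaking Duration Ratio multiplies the odds of
# the modeled outcome by exp(-0.9142), i.e. roughly 0.40.
```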
Cluster | Instances (Count) | Conflicting (Ratio) | Unsuccessful (Ratio) | Obstructed (Ratio) |
---|---|---|---|---|
Prolonged Single-turn Scenario (C1) | 154 | 29.9% | 18.8% | 48.7% |
High-status Role Scenario (C2) | 96 | 27.1% | 8.3% | 35.4% |
Intense Interaction Scenario (C3) | 36 | 38.9% | 13.9% | 52.8% |
Low-activity Scenario (C4) | 136 | 5.2% | 11.8% | 17.0% |
High-competition Scenario (C5) | 69 | 40.6% | 27.5% | 68.1% |
Chen, J.; Gu, C.; Zhang, J.; Liu, Z.; Ma, B.; Konomi, S. Coordination of Speaking Opportunities in Virtual Reality: Analyzing Interaction Dynamics and Context-Aware Strategies. Appl. Sci. 2024, 14, 12071. https://doi.org/10.3390/app142412071