Article

A Study on the Effects of Automatic Scaling for 3D Object Manipulation in Virtual Reality

1 Department of Computer Science, Kwangwoon University, 20, Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
2 School of Software, Kwangwoon University, 20, Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(9), 1198; https://doi.org/10.3390/sym16091198
Submission received: 18 July 2024 / Revised: 28 August 2024 / Accepted: 3 September 2024 / Published: 12 September 2024
(This article belongs to the Section Computer)

Abstract

Virtual reality offers ordinary users the ability to observe and interact with various abstract or concrete objects visualized in a three-dimensional space from different angles. Users can manipulate, transform, or reconstruct these objects similarly to how they might in a real environment. Manipulating objects in virtual reality is not as effortless as in the real world, due to the lack of sensory feedback and limited input freedom. However, it also offers new advantages that the real world cannot provide, such as the ability to easily select and control remote objects and the support of various auxiliary user interfaces. In particular, when it is necessary to alternately manipulate objects of various sizes, scaling the user’s avatar symmetrically allows for more effective manipulation than in the real world. However, manual scaling interfaces can be cumbersome and may induce dizziness. This study proposes an interaction technique that allows users to conveniently manipulate objects of various sizes without manual scale adjustment, by automatically and instantly adjusting the scale factor according to the size of the selected object and its adjacent objects. To compensate for the change in scale, we also implement a position correction mechanism that adjusts the user’s position in the virtual environment. Preliminary experiments with a small group of participants confirmed that automatic scale adjustment produces significant effects. Based on the feedback from these experiments, a more refined distance calculation method and the timing for scale adjustment were derived. In the main experiment with 14 participants, it was confirmed that the automatic scale adjustment method proposed in this study led to higher accuracy and lower discomfort in task completion compared to the conventional manual scale adjustment method. We expect that the results of this study will effectively contribute to the creation of virtual reality content that requires interaction with objects of various sizes in the future.

1. Introduction

Virtual reality is a technology that allows users to perceive a simulated virtual world in three dimensions and interact with it using their bodies, using equipment such as head-mounted displays (HMDs) and motion controllers. Today, virtual reality is increasingly being utilized in various application fields such as entertainment, healthcare, military, and architecture. The continuous decrease in the prices of related equipment and performance improvements are expected to further accelerate this trend toward wider adoption. Recently, technologies that combine the real world and the virtual world—providing mixed reality—are also rapidly advancing, which is expected to greatly expand their application range.
Most virtual reality applications require interfaces that allow users to interact with visually represented objects in a three-dimensional space. For example, in the field of architecture, interfaces are needed to select, move, or combine objects such as walls, doors, and stairs for interior design. Users can observe three-dimensional objects from various angles through the HMD, similar to the real world, and directly grasp, move, and place these objects using motion controllers. However, three-dimensional manipulation in virtual reality has several limitations compared to the real world. In reality, users can manipulate objects precisely using the high degree of freedom of their entire body, but in virtual reality, the freedom of input is limited to the movement and rotation of the headset and the motion controllers. Additionally, while users can delicately perceive contact with objects through tactile stimulation in the real world, obtaining tactile information in virtual reality is limited to the relatively low-resolution haptic feedback provided by motion controllers. These limitations make three-dimensional manipulation in virtual reality difficult, resulting in constraints on its application range.
However, virtual reality opens up various possibilities for overcoming the limitations of three-dimensional manipulation by enabling new types of interactions not permitted in the real world. Notably, in virtual reality, users can adjust the precision of manipulation by enlarging or shrinking their avatars. When manipulating small objects, the avatar’s size can be reduced for precise control, and when handling large objects, the avatar’s size can be increased to manipulate them conveniently within the field of view. One problem is that the process of manually enlarging or shrinking the avatar’s size can be cumbersome and sometimes cause dizziness. This issue can be exacerbated when dealing with objects of significantly different sizes alternately.
In this paper, we propose an interface that automatically symmetrically adjusts the avatar’s scale to easily manipulate objects of various sizes in virtual reality without the inconvenience of manually adjusting the scale. When a user selects a specific object, the scaling factor is calculated considering the sizes of the object and its surrounding objects, and the avatar’s size is adjusted accordingly to display the object at an appropriate size within the field of view. To mitigate potential dizziness, the avatar’s scale is changed instantly rather than gradually. To verify the effectiveness of this automatic scaling in practical applications, we implemented an experimental environment where users can assemble structures of a specific shape using blocks of various sizes and conducted experiments measuring the accuracy and efficiency of block assembly with multiple subjects. Through quantitative evaluation results and qualitative analysis of interviews, we confirmed that the proposed method is effective for the three-dimensional manipulation of virtual objects of various sizes in virtual reality.
The method of scaling avatars in virtual reality has been proposed multiple times in previous studies, and this paper provides a detailed explanation of these existing studies in Section 2. However, while previous research primarily introduced avatar scaling for locomotion purposes, the key difference in this paper is that we designed and validated the scaling method specifically for manipulating 3D objects of various sizes—a novel approach to our knowledge. Specifically, this paper presents two contributions to the design of the scaling method. First, we developed a sophisticated algorithm that determines the scaling factor based on the sizes of the 3D object being manipulated and its surrounding objects in virtual reality. Additionally, we introduced a repositioning method for the avatar to prevent the user’s gaze from losing track of the object being manipulated when only scaling is applied. Furthermore, in terms of validating the scaling method, our study established an experimental environment closely resembling real-world applications, providing a high degree of manipulation freedom. Within this environment, we conducted both preliminary and main studies, allowing us to comprehensively analyze the effectiveness and limitations of the proposed method and to identify future areas for improvement, which is another significant contribution of this study.
To investigate the effectiveness of an automatic scaling method for smooth manipulation of 3D objects in virtual reality, the remainder of this paper is organized as follows. Section 2 introduces related work and summarizes how the method proposed in this study is both related to and differentiated from these existing studies. Section 3 provides a detailed explanation of the automatic scaling method proposed in this research. This includes an introduction to the virtual environment and user interface used in the experiments, along with a clear presentation of the specific scaling algorithm using formulas and pseudocode. Section 4 sequentially explains the preliminary study conducted based on the proposed method, including its objectives, methodology, participants, procedures, results, and discussion. Based on the results of this preliminary study, a more optimized automatic scaling method was used to carry out the main study, which is thoroughly described in Section 5, covering its objectives, methodology, participants, procedures, results, and discussion. The results of the main study demonstrate that the proposed method leads to more accurate interactions and task completion with less mental and physical load compared to traditional methods requiring manual manipulation. Finally, Section 6 presents the conclusion of this study, discussing its limitations and suggesting directions for future research.

2. Related Work

Virtual reality technology has been extensively researched over the past 50 years, with studies on 3D interaction interfaces using VR equipment such as HMDs and gloves becoming particularly active around the 1990s [1]. A significant number of studies have focused on methods for selecting and manipulating objects in a 3D virtual world [2,3]. One of the most basic methods is commonly known as the ‘simple virtual hand’ technique, which involves directly selecting and manipulating objects by mapping the user’s hand to a virtual hand on a one-to-one basis [4]. While this method is very intuitive, it has the limitation of making it difficult to interact with distant objects in a wide virtual environment.
To enable remote interaction in virtual reality, methods such as manipulating distant objects by extending the arm’s length arbitrarily [5], using a ray to select objects it intersects with [6], shrinking the virtual environment into a miniature form to manipulate objects [7], and bringing remote objects into the reach of the user’s arm by shrinking them [8] have been proposed. Additionally, these methods have been extended in various ways to handle more complex environments composed of many objects, such as bending the ray [9], using a bubble cursor to select the nearest target to the ray [10], mapping finger motions onto virtual arms to enable safe interaction in confined spaces [11], and employing multiple virtual hands, termed ‘ninja hands’, to facilitate target selection [12]. In the experimental environment of this study, the most basic ‘simple virtual hand’ method is adopted, and the selection and manipulation of distant objects are achieved by moving the avatar.
Unlike the real world, virtual reality offers the advantage of allowing users to explore and manipulate the virtual environment at a desired scale by enlarging or shrinking it as needed. In the field of virtual reality, research on interfaces that effectively explore the virtual environment using such multi-scale functionality has been conducted [13]. One branch of this research is the original WIM (Worlds in Miniature) method, which involves shrinking the virtual environment into a miniature form for manipulation. LaViola Jr. et al. proposed the Step WIM, which places the miniature of the virtual environment at the user’s feet, enabling easy navigation to desired locations in a large-scale environment through walking [14]. Wingrave et al. proposed SSWIM, which allows for various scalings and scrollings of WIM, enabling exploration and manipulation of the virtual environment at different scales, and conducted comparative experiments with WIM [15]. Pivovar et al. enhanced flexibility by adding the functionality to scale only specific areas of the WIM, similar to SSWIM [16]. Unlike WIM and its variants, which aim to enhance convenience in navigation by scaling down the virtual environment, this study seeks to improve the ease of three-dimensional manipulation of objects of various sizes by scaling the avatar up or down instead of the virtual environment.
Similar to the method proposed in this paper, studies have also been conducted on methods for exploring the virtual environment at various scales by directly enlarging or shrinking the avatar instead of using proxy objects like WIM. Krekhov et al. proposed the GulliVR method, which allows players to switch between a giant mode, where the avatar is greatly enlarged, and a normal mode, thereby enabling physical walking exploration of multi-scale virtual environments in VR games [17]. Abtahi et al. conducted experiments comparing three different methods of increasing the perceived movement speed by either enlarging the avatar or amplifying the avatar’s movement, finding that ground-level scaling was the most effective [18]. Weissker et al. proposed a method for effectively exploring multi-scale virtual environments by adjusting not only the avatar’s position but also the scale in an environment where movement is performed through teleportation instead of physical walking [19]. Some researchers have proposed methods to automatically adjust movement speed, scale factor, and stereoscopic parameters to reduce visual fatigue that can occur when dynamically adjusting scale in multi-scale virtual environments [20,21]. Unlike these studies, which primarily focus on navigation, this paper proposes a method that focuses on manipulation by automatically adjusting the avatar’s size according to the object to be handled.
Research on manipulation in multi-scale virtual environments has also been conducted in the context of collaborative efforts by multiple users, beyond individual user exploration. Zhang and Furnas proposed a multiscale collaborative virtual environment (mCVE) that allows multiple users to collaboratively understand and manage large structures containing important features at different scale levels in a desktop 3D environment [22]. Chénéchal et al. introduced a method for two or more users to collaboratively manipulate a single object at different scales in virtual reality [23]. Piumsomboon et al. presented a system where local augmented reality users and remote virtual reality users can collaborate through 360-degree video sharing and tangible interaction at different scales [24]. Drey et al. also confirmed that cooperative pair-learning in virtual reality, whether under symmetrical or asymmetrical systems, is effective from a learning perspective [25]. These studies primarily focus on multi-user collaboration, whereas this study differs in that it focuses on facilitating smooth three-dimensional manipulation for a single user.
This study is based on previous research but distinguishes itself in several key aspects. First, we focus on facilitating smooth manipulation of three-dimensional objects of various sizes by setting the scaling ratio of the avatar to reflect the size of the selected object and its surrounding objects. Additionally, rather than merely scaling the avatar up or down, we also adjust the avatar’s position to ensure that the user’s gaze remains consistently focused on the selected object. Lastly, we verified the practical effectiveness of the proposed method through both a preliminary study and a main study within a practical 3D object manipulation environment involving block assembly.

3. Automatic Scaling

3.1. Virtual Environment and User Interface

We developed a virtual reality environment for block assembly simulation using Unity 2021.3.16f1 and XR Interaction Toolkit 2.5.4 (Unity Technologies, San Francisco, CA, USA). This environment serves as a platform for our study on automatic scaling. The system consists of a virtual environment, user interface, and customizable experimental modules. Users interact with this environment through virtual avatars, experiencing an immersive first-person perspective. An inverse kinematics (IK) system translates the user’s physical movements, captured by the Oculus Quest 2 HMD and controllers, into real-time avatar actions. User interaction relies on the Oculus Quest 2 controllers for object manipulation and navigation. The grip buttons allow users to grab and move blocks, while the left controller’s thumb stick enables standard walking locomotion in any direction relative to the user’s head orientation. The right controller’s thumb stick offers additional functions: lateral movement rotates the user’s view, while forward movement activates a teleportation UI. The teleportation UI displays a curved arc with an endpoint, and releasing the thumb stick teleports the user to the selected location.
The environment also features a flying mechanism, activated by holding the right controller’s primary button. This system calculates the controller’s position relative to the HMD, determining flight direction and speed. The flight direction follows the vector from the HMD to the controller, with speed proportional to the distance between them. Extending the controller farther from the HMD increases speed, while bringing it closer slows movement. Positioning the controller behind the HMD enables backward flight. This feature allows for free navigation in three-dimensional space, quick traversal of large areas, and access to elevated positions.
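For illustration, a minimal Python sketch of this flight mapping is shown below; the gain constant and function name are illustrative assumptions, not values or identifiers from our implementation.

import numpy as np

SPEED_GAIN = 2.0  # assumed proportionality constant between hand offset (m) and flight speed (m/s)

def flight_velocity(hmd_pos: np.ndarray, controller_pos: np.ndarray) -> np.ndarray:
    """Velocity vector for the flying mechanism: direction follows the HMD-to-controller
    vector, and speed grows with the distance between them. A controller held behind
    the HMD yields a backward-pointing vector, i.e., backward flight."""
    offset = controller_pos - hmd_pos          # vector from headset to hand
    distance = np.linalg.norm(offset)
    if distance < 1e-6:                        # avoid division by zero when the hand is at the HMD
        return np.zeros(3)
    direction = offset / distance
    return direction * distance * SPEED_GAIN   # speed proportional to arm extension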
The virtual space contains various interlocking blocks of different geometric shapes, such as cubes, triangular prisms, and cylinders. Each block features one or more visually distinct, indented connection points. These indentations function similarly to omnidirectional magnets, allowing versatile attachments regardless of orientation. Users can manipulate blocks by approaching them and using the grip button or by aiming a hand ray and activating the grip button. When bringing two blocks close together, a visual cue indicates a potential connection between their indented points. Releasing the grip button at this point joins the blocks. To separate connected blocks, users can grip a block, pause briefly, and then pull it away from the structure. This design, coupled with the comprehensive control scheme, provides users with versatile navigation and manipulation options, enhancing their ability to interact efficiently with the virtual environment and construct complex structures.
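As a simple illustration of how such proximity-based connections can be detected, the following Python sketch compares the connection points of two blocks and reports the closest pair within a snap threshold; the threshold value and function name are assumptions for illustration only, not parameters of our system.

import numpy as np

SNAP_THRESHOLD = 0.05  # metres; assumed value, not taken from the study

def find_connection(points_a, points_b):
    """Return the closest pair of connection points (one from each block) if they lie
    within the snap threshold, otherwise None. Points are 3D coordinates."""
    best_pair, best_dist = None, SNAP_THRESHOLD
    for pa in points_a:
        for pb in points_b:
            dist = np.linalg.norm(np.asarray(pa) - np.asarray(pb))
            if dist < best_dist:
                best_pair, best_dist = (pa, pb), dist
    return best_pair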
For this study, we implemented an automatic scaling algorithm within the environment. When a user grasps a block, the algorithm activates, dynamically adjusting the avatar’s scale. This creates the illusion of the user’s size changing relative to the environment, while actually modifying the avatar’s dimensions. Through this approach, we investigate how automatic scaling affects user perception and interaction in virtual reality environments. Figure 1 illustrates the effect of our automatic scaling algorithm. In (a), we see the avatar in its default state interacting with a large object. In (b), automatic scaling is applied, adjusting the relative size of the avatar to facilitate easier interaction with the object.

3.2. Scaling Algorithm

The automatic scaling algorithm is a core component of our system, meticulously designed to enhance user interaction with objects of various sizes in the virtual environment. This algorithm operates by instantly adjusting the scale at the moment an object is grasped, providing an immediate adaptation to the object’s size. When a user grasps an object, the algorithm initiates a series of rapid calculations, beginning with the computation of the object’s bounding box—a virtual container that encapsulates the entire object. From this bounding box, the algorithm determines the maximum diagonal length, which serves as a crucial parameter for the scaling calculation. This approach allows for accurate size estimation regardless of the object’s orientation or shape complexity, ensuring consistent scaling across diverse object types.
Algorithm 1 presents the pseudocode for calculating the bounding box and the longest diagonal of the object(s), which are crucial for determining the object’s dimensions.
Algorithm 1 Calculate the bounding box and longest diagonal.

function CalcBB(objects)                                        ▹ CalcBB: Calculate Bounding Box
    min_point ← Vector3(∞, ∞, ∞)
    max_point ← Vector3(−∞, −∞, −∞)
    if IsEmpty(objects) then
        return CreateBound(Vector3(0, 0, 0), Vector3(0, 0, 0))
    for each object in objects do
        bound ← GetOB(object)                                   ▹ GetOB: Get Object Bound
        min_point ← Vector3Min(min_point, bound.min)
        max_point ← Vector3Max(max_point, bound.max)
    center ← (min_point + max_point) / 2
    size ← max_point − min_point
    final_bounding_box ← CreateBound(center, size)
    return final_bounding_box

function CalcOD(final_bounding_box)                             ▹ CalcOD: Calculate Object Diagonal
    corners ← GetCorners(final_bounding_box)
    object_diagonal ← 0
    for i from 0 to Length(corners) − 1 do
        for j from i + 1 to Length(corners) − 1 do
            length ← Distance(corners[i], corners[j])
            if length > object_diagonal then
                object_diagonal ← length
    return object_diagonal

objects ← ListOf(GameObjects)
if IsEmpty(objects) then
    return 0
final_bounding_box ← CalcBB(objects)
object_diagonal ← CalcOD(final_bounding_box)
return object_diagonal
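For readers who prefer executable code, the following Python sketch mirrors Algorithm 1 under the assumption that each object exposes its axis-aligned bounds as a (min corner, max corner) pair; it is an illustrative re-implementation, not the Unity code used in the study.

import itertools
import numpy as np

def calc_bounding_box(objects):
    """Merge the axis-aligned bounds of all objects into one bounding box.
    Each object is assumed to be a (min_corner, max_corner) pair of 3D points."""
    if not objects:
        return np.zeros(3), np.zeros(3)
    min_point = np.full(3, np.inf)
    max_point = np.full(3, -np.inf)
    for obj_min, obj_max in objects:
        min_point = np.minimum(min_point, obj_min)
        max_point = np.maximum(max_point, obj_max)
    return min_point, max_point

def calc_object_diagonal(min_point, max_point):
    """Longest distance between any two corners of the bounding box
    (for an axis-aligned box this is its main diagonal)."""
    corners = [np.array([x, y, z])
               for x in (min_point[0], max_point[0])
               for y in (min_point[1], max_point[1])
               for z in (min_point[2], max_point[2])]
    return max(np.linalg.norm(a - b) for a, b in itertools.combinations(corners, 2))

# Example: a unit cube with corners (0,0,0) and (1,1,1) yields a diagonal of sqrt(3) ≈ 1.73.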
The algorithm utilizes the object’s dimensions and the camera’s field of view (FOV) to compute an ideal viewing distance. This required distance is calculated as follows:
RequiredDistance = D / (2 · tan(θ / 2))
where D is the diagonal length of the object and θ is the camera’s field of view in radians. This equation can be easily derived by applying trigonometry to the diagonal length D and the camera’s field of view angle θ. When the center of the object’s diagonal is aligned with the center of the field of view and the object fully fills the view, the distance from the camera to the object, denoted as r, can be expressed by the equation tan(θ/2) = (D/2)/r. Solving for r, we obtain r = (D/2) / tan(θ/2), which corresponds to the RequiredDistance in the above equation.
This required distance is then used to calculate a scaling factor by comparing it to an interaction distance, which was determined through preliminary testing in our Unity-based VR environment.
ScaleFactor = RequiredDistance / InteractionDistance
The reference interaction distance of 0.4 m was established based on initial self-testing within our specific Unity VR setup. This distance represented a comfortable arm position for object manipulation in our environment, neither too close to the body nor requiring full arm extension. It is important to note that this distance may vary depending on the VR development environment, hardware setup, and individual user preferences. We recommend that developers implementing this system conduct their own testing to determine the most suitable reference distance for their specific VR environment and user base. This approach aims to ensure that the entire object fits comfortably within the user’s field of view and interaction space, striking a balance between visibility and manipulability. However, further user studies would be beneficial to validate and potentially refine this distance across a broader range of users and VR setups.
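A minimal numeric sketch of the two formulas above, assuming the 0.4 m reference distance and a field of view given in degrees, is shown below.

import math

def required_distance(diagonal: float, fov_degrees: float) -> float:
    """Distance at which an object with the given diagonal just fills the field of view."""
    theta = math.radians(fov_degrees)
    return diagonal / (2.0 * math.tan(theta / 2.0))

def scale_factor(diagonal: float, fov_degrees: float, interaction_distance: float = 0.4) -> float:
    """Avatar scale factor: the required viewing distance relative to the reference
    interaction distance (0.4 m in this study's setup)."""
    return required_distance(diagonal, fov_degrees) / interaction_distance

# Example: a block with a 2 m diagonal viewed with a 90-degree FOV requires about 1 m of
# distance, so the avatar is scaled by roughly 1.0 / 0.4 = 2.5.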
A key feature of our algorithm is the preservation of the user’s viewpoint during scaling. As shown in Algorithm 2, we implement a position correction mechanism that adjusts the player’s position in the virtual environment to compensate for the change in scale. This is achieved by storing the initial camera and player positions before applying the new scale, calculating the difference between the initial and new camera positions after scaling, using this difference to compute a corrected player position, and applying this corrected position to the player avatar. The following equations show how we calculate the position correction and apply it to maintain the user’s viewpoint.
PositionCorrection = InitialCameraPos − NewCameraPos
CorrectedPlayerPos = InitialPlayerPos + PositionCorrection
This approach ensures that while the player’s scale changes, their perspective and relative position in the virtual environment remain consistent, preventing disorientation and maintaining the user’s sense of presence. Furthermore, our algorithm considers not only the grasped object but also surrounding objects within a predefined radius. It calculates an average scale factor based on these nearby objects and then combines this with the scale factor of the grasped object using a weighted average. This contextual scaling approach provides a more balanced and intuitive size adjustment, enhancing the user’s spatial awareness and overall immersive experience.
Specifically, in our experiment, we assigned a weight of 0.7 to w₁ for the grasped object and 0.3 to w₂ for the nearby objects. w₁ was given a higher value because accurate manipulation of the object that the user is directly interacting with requires prioritizing scaling based on that object. On the other hand, w₂ was assigned a lower value, reflecting that while surrounding objects should influence the scaling, they are considered secondary factors. These specific values of 0.7 and 0.3 were experimentally determined through multiple preliminary experiments conducted during the setup of the experimental environment, incorporating subjective feedback from participants. Therefore, these values were appropriately adjusted to fit the experimental conditions of this study. They are not absolute values and should be adjusted accordingly through preliminary experiments if the experimental environment changes.
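In formula form, the final scale applied to the avatar is the weighted average computed at the end of Algorithm 2:
FinalScale = (w₁ · GraspedObjectScale + w₂ · AvgSurroundingScale) / (w₁ + w₂)
where GraspedObjectScale is the scale factor of the grasped object and AvgSurroundingScale is the average scale factor of the surrounding objects.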
Unlike gradual scaling approaches, our method provides an instantaneous adjustment, allowing users to quickly adapt to the new scale without transition periods. This immediate scaling contributes to a responsive and efficient interaction experience in the virtual environment, allowing users to focus on their tasks without being distracted by scaling transitions.
Algorithm 2 Calculate player scale and adjust position.

function CalcRequiredDistance(D, θ)                             ▹ D: Diagonal, θ: FOV
    θ_radians ← ConvertToRadians(θ)
    distance ← D / (2 · tan(θ_radians / 2))
    return distance

function CalcScaleFactor(required_distance, fixed_distance)
    scale_factor ← required_distance / fixed_distance
    return scale_factor

function CalcAverageScaleFactor(objects)
    total_scale ← 0
    count ← Length(objects)
    for each object in objects do
        object_diagonal ← CalcOD(object)                        ▹ CalcOD from Algorithm 1
        required_distance ← CalcRequiredDistance(object_diagonal, camera_FOV)
        scale_factor ← CalcScaleFactor(required_distance, fixed_distance)
        total_scale ← total_scale + scale_factor
    average_scale ← total_scale / count
    return average_scale

grasped_object ← GetGraspedObject()
object_diagonal ← CalcOD(grasped_object)                        ▹ CalcOD from Algorithm 1
camera_FOV ← GetCameraFOV()
fixed_distance ← 0.4                                            ▹ Arbitrary fixed distance, e.g., 0.4 m
required_distance ← CalcRequiredDistance(object_diagonal, camera_FOV)
grasped_object_scale ← CalcScaleFactor(required_distance, fixed_distance)
surrounding_objects ← GetSurroundingObjects(player, radius)
avg_surrounding_scale ← CalcAverageScaleFactor(surrounding_objects)
w₁ ← 0.7                                                        ▹ Weight for grasped object
w₂ ← 0.3                                                        ▹ Weight for surrounding objects
final_scale ← (grasped_object_scale · w₁ + avg_surrounding_scale · w₂) / (w₁ + w₂)
initial_camera_pos ← GetCameraPosition()
initial_player_pos ← GetPlayerPosition()
SetPlayerScale(final_scale)
new_camera_pos ← GetCameraPosition()
position_correction ← initial_camera_pos − new_camera_pos
corrected_player_pos ← initial_player_pos + position_correction
SetPlayerPosition(corrected_player_pos)
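The viewpoint-preservation step of Algorithm 2 can also be expressed independently of the game engine, as in the following Python sketch; the getter and setter callables stand in for engine-specific functions and are assumptions rather than our actual API.

import numpy as np

def apply_scale_with_viewpoint_preservation(final_scale,
                                            get_camera_position,
                                            get_player_position,
                                            set_player_scale,
                                            set_player_position):
    """Rescale the avatar, then shift it so that the camera (and thus the user's viewpoint)
    stays where it was before the scale change."""
    initial_camera_pos = np.asarray(get_camera_position())
    initial_player_pos = np.asarray(get_player_position())

    set_player_scale(final_scale)               # instantaneous scale change

    new_camera_pos = np.asarray(get_camera_position())
    position_correction = initial_camera_pos - new_camera_pos
    set_player_position(initial_player_pos + position_correction)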

4. Preliminary Study

4.1. Purpose and Objectives

The preliminary study was designed to evaluate the initial implementation of our proposed automatic scaling technology and gather foundational data for the main experiment. Our primary aim was to assess the effectiveness of automatic scaling in improving user performance during virtual reality object manipulation tasks, specifically focusing on the assembly of a predefined rocket structure. We sought to compare user experiences between automatic and manual scaling conditions, hypothesizing that automatic scaling would lead to more intuitive and efficient interactions. Additionally, we aimed to identify potential areas for improvement in the algorithm, such as the responsiveness of scaling transitions and the range of object sizes effectively handled. The insights gathered from this study were crucial in determining the design and parameters of our subsequent main study, allowing us to refine our approach and experimental protocols.

4.2. Methodology

Our preliminary study employed a within-subjects design, allowing each participant to experience both automatic and manual scaling conditions. We conducted the experiment in a virtual environment developed using Unity. The participants experienced the virtual environment through an Oculus Quest 2 VR headset. The core task involved assembling a predefined rocket structure using virtual blocks under both scaling conditions. The virtual blocks were designed with specific connection points, enabling participants to construct complex structures freely. To ensure the validity of our results, we counterbalanced the order of scaling conditions across participants, mitigating potential order effects. The automatic scaling algorithm was implemented to dynamically adjust the avatar’s scale when a user grasped a block, creating the illusion of the user’s size changing relative to the environment.
We collected both quantitative and qualitative data to gain comprehensive insights. Task completion time served as our primary quantitative measure, while post-task interviews provided qualitative insights into user experiences and preferences. During the experiment, we also observed and noted participant behaviors and comments, paying particular attention to their interactions with objects of varying sizes and their navigation within the virtual space. Before each condition, participants underwent a brief training session to familiarize themselves with the VR environment and block manipulation techniques, ensuring a baseline level of competence across all participants. This training included practice with the flying feature, activated by the right controller’s primary button, which allowed for free navigation in three-dimensional space.
Figure 2 illustrates the experimental task setup. Participants were presented with a target structure (left) and were required to assemble it using the provided component objects (right). This setup was designed to test participants’ ability to manipulate objects of varying sizes and complexities, thereby highlighting the potential benefits of automatic scaling.

4.3. Participants

Our preliminary study involved six participants, consisting of five males and one female, aged between 24 and 30 years (average age 26.6 years). We aimed to include individuals with varying levels of VR experience to better understand how our automatic scaling technology might affect different user groups. The participants included two VR novices with minimal prior experience, three intermediate users who occasionally used VR, and one experienced user who regularly engaged with VR technology. This mix allowed us to explore the technology’s potential in both easing the learning curve for newcomers and enhancing the experience for more seasoned users. All participants had normal or corrected-to-normal vision and reported no issues that might interfere with VR use, such as motion sickness or physical limitations.

4.4. Procedure

The experiment began with an introduction session where we explained the study’s objectives and procedures to the participants. We described the virtual environment they would interact with, introduced the concept of automatic scaling, and outlined the tasks they would perform. After addressing the initial questions, we obtained written consent from each participant. The introduction was followed by a training session that familiarized participants with the VR environment, Oculus Quest 2 controls, and block manipulation techniques. This training included practice with basic movements, object interaction, and navigation within the virtual space, as well as the use of the flying feature.
The main experimental tasks involved constructing a predefined rocket structure using virtual blocks under both automatic and manual scaling conditions. Throughout the experiment, we recorded task completion times and observed participant behaviors and comments, focusing on their interactions with objects of varying sizes and their navigation strategies. After completing both conditions, we conducted semi-structured interviews with each participant to gather detailed qualitative feedback on their experiences, preferences, and any challenges they encountered. These interviews allowed us to gain deeper insights into the participants’ perceptions of the automatic scaling technology and its impact on their task performance and overall VR experience.

4.5. Results and Discussion

The preliminary experiment provided quantitative and qualitative data on the effectiveness of the automatic scaling technology. Quantitative analysis showed varied results in task performance under the automatic scaling condition compared to manual scaling.
Table 1 shows the completion times for each participant under both manual and automatic scaling conditions. The results indicate that the impact of automatic scaling on task completion time varied among participants. While some participants (1, 3, 4, and 5) completed the task faster with automatic scaling, others (2 and 6) took longer. On average, participants completed the block assembly tasks slightly faster with automatic scaling (M = 419 s) than with manual scaling (M = 486 s), resulting in an average time reduction of 67 s.
Structures built using automatic scaling demonstrated higher fidelity to the presented models, suggesting improved task accuracy. Although we did not quantitatively measure accuracy in this preliminary study, visual inspection indicated a noticeable improvement in the precision of assembled structures. This observation led us to include a formal accuracy measurement in our main study. These improvements in both efficiency and apparent precision suggest that automatic scaling technology can enhance performance in virtual reality object manipulation tasks, even in cases where it did not necessarily lead to faster task completion.
Qualitative data from participant interviews and observations provided insights into the user experience of automatic scaling technology. Most participants reported positive experiences, noting improved visibility, ease of object manipulation, and enhanced user-friendliness as advantages. These benefits were particularly evident for VR novices, suggesting that the technology could potentially lower entry barriers for new users in virtual reality environments.
However, the experiential benefits were not uniform across all participants. The preliminary study implemented gradual scaling transitions, but this feature unexpectedly caused discomfort for some participants, particularly during rapid scale changes. This finding was crucial in informing the design of the main study, where gradual scaling was subsequently removed to mitigate potential motion sickness issues.
This study also identified areas for improvement through participant feedback. These included instances of unintended scaling, particularly when manipulating objects near the edges of reach, and a lack of clear visual feedback during the scaling process. These insights were valuable for refining the automatic scaling algorithm and improving the overall user experience.
In summary, the preliminary experiment results demonstrated the potential of automatic scaling technology to enhance object manipulation experiences in VR environments while also highlighting the need for fine-tuning to accommodate individual differences and preferences. These findings informed the design of the main experimental study, including adjustments to scaling parameters and the implementation of additional user control options, with a particular focus on addressing the motion sickness issues observed in the preliminary study.

5. Main Study

5.1. Purpose and Objectives

The main study was designed to build upon the findings of the preliminary experiment and to provide a more comprehensive evaluation of the automatic scaling technology. The primary objectives were to assess the effectiveness of the refined automatic scaling algorithm in improving user performance and experience in virtual reality object manipulation tasks. Additionally, we aimed to compare the user experience between automatic and manual scaling conditions using the same experimental setting as the preliminary study, but with significant improvements to the algorithm based on initial findings.
Based on the insights gained from the preliminary study, we implemented two major refinements to the automatic scaling algorithm. First, the gradual scaling transition was removed to address the motion sickness issues reported by some participants in the preliminary study. Second, and more importantly, we introduced a camera position adjustment mechanism that maintains the user’s viewpoint during scaling. In the preliminary study, scaling caused the user’s perspective to shift upwards, giving a direct sense of growing larger. However, this sometimes interfered with the user’s intended manipulations. In the main study, we implemented a correction that keeps the user’s viewpoint stable during scaling, preserving their intended interactions with objects regardless of scale changes.

5.2. Methodology

This study employed a within-subjects design to evaluate the refined automatic scaling algorithm, building upon the framework established in the preliminary experiment. We maintained consistency in the base platform by utilizing the same Unity-based virtual environment and Oculus Quest 2 VR headset as in the preliminary study. The key methodological difference was the implementation of instantaneous scaling with viewpoint preservation in the automatic scaling condition. This modification aimed to address the motion sickness issues reported earlier while maintaining users’ intended interactions during scaling events.
Participants completed block assembly tasks under both automatic and manual scaling conditions. To mitigate order effects, we counterbalanced the order of conditions across participants. This ensured that observed performance differences could be more confidently attributed to the scaling mechanism rather than the order of exposure. Before each condition, participants underwent a brief training session to familiarize themselves with the respective scaling mechanism, aiming to reduce the learning curve effect.
Data collection involved both quantitative and qualitative measures for a comprehensive evaluation of the refined algorithm. Quantitative data included task completion times and accuracy of assembled structures. We gathered qualitative data through post-task questionnaires and semi-structured interviews, focusing on user experience, perceived ease of use, and overall satisfaction with each scaling condition. Standardized questionnaires, including the Virtual Reality Sickness Questionnaire (VRSQ) and NASA Task Load Index (NASA-TLX), were employed to assess the impact of our refined scaling algorithm on user comfort and cognitive load. These measures allowed for comparability with existing literature and provided insights into the potential broader applications of automatic scaling technology.

5.3. Participants

The study recruited 14 participants (7 males, 7 females) with ages ranging from 20 to 30 years (M = 24.64, SD = 3.22). All participants possessed normal or corrected-to-normal vision, a prerequisite for effective interaction within the virtual environment. Exclusion criteria included any self-reported restrictions on VR device usage to mitigate potential confounding variables related to physical limitations. Informed consent was obtained from all participants prior to the experiment, in accordance with the ethical guidelines approved by the Institutional Review Board. The consent process included a comprehensive briefing on the study’s nature, potential risks, and participants’ rights, including the option to withdraw without penalty. To maintain experimental integrity and minimize response bias, participants were not informed of the specific hypotheses under investigation. This study design and participant selection process aimed to ensure a balanced sample and control for extraneous variables that could influence the assessment of the automatic scaling algorithm’s efficacy.

5.4. Procedure

The experimental procedure was meticulously designed to evaluate the efficacy of automatic scaling in virtual reality object manipulation tasks. Upon arrival, participants were briefed on the study’s objectives and the virtual environment. This session included a detailed explanation of both automatic and manual scaling mechanisms. Following the introduction, participants engaged in a comprehensive training session to familiarize themselves with the Oculus Quest 2 hardware and the virtual environment, covering basic movement techniques, object interaction methods, and navigation strategies (Figure 3).
The main experimental phase consisted of block assembly tasks under both scaling conditions. Throughout the tasks, we recorded completion times based on participants’ subjective judgment of task completion. Participants were instructed to verbally indicate when they believed they had finished the assembly task, and this point was marked as the completion time. We observed participants’ behaviors throughout the process. Task accuracy was evaluated by comparing completed structures against predefined models. This comparison was conducted by assessing the position of each block in the completed structure against its corresponding position in the predefined model. Starting from a perfect score of 100, points were deducted for each block that was incorrectly positioned or missing in the completed structure relative to the predefined model. This method provided a quantitative measure of task accuracy, reflecting how closely the participant’s completed structure matched the intended design. As a post-task, participants completed the VRSQ, NASA-TLX, and a custom questionnaire. The experiment concluded with semi-structured interviews to gather qualitative feedback on participants’ experiences and perceptions of the scaling technologies.

5.5. Results and Discussion

Analysis of the experimental data revealed nuanced differences between the automatic and manual scaling conditions across several key metrics. These differences were observed in task accuracy, perceived workload, user satisfaction, and to a lesser extent, in virtual reality sickness. Interestingly, task completion times did not show significant differences between conditions. Table 2 summarizes the quantitative results comparing automatic and manual scaling conditions across various metrics.
As shown in Table 2, task completion times did not significantly differ between conditions (p = 0.497). This lack of significant difference may be attributed to limitations in the experimental design, particularly the ambiguity in the criteria for task completion. Despite similar completion times, task accuracy was significantly higher in the automatic scaling condition (97.69% vs. 84.03%, p < 0.01), suggesting that participants achieved higher quality work without sacrificing speed.
To assess the impact of automatic scaling on user experience in VR environments, we employed the NASA Task Load Index (NASA-TLX), a widely recognized tool for measuring perceived workload [26]. The NASA-TLX evaluates six distinct dimensions of workload, providing a comprehensive understanding of the user’s experience. Mental demand assesses the level of cognitive activity required, such as thinking, deciding, calculating, or remembering. Physical demand measures the amount of physical activity needed, including actions like pushing, pulling, or manipulating objects in the virtual space. Temporal demand evaluates the time pressure felt during the task, reflecting how hurried or rushed the participant felt. The Performance dimension allows participants to rate their perceived success in accomplishing the assigned tasks, with higher scores indicating better self-assessed performance. Effort gauges the combined mental and physical exertion required to achieve the task, while frustration measures the level of discouragement, irritation, stress, or annoyance experienced during the task.
In our study, these dimensions were particularly relevant in assessing how automatic scaling affected the participants’ interaction with virtual objects. For instance, we anticipated that automatic scaling might reduce mental demand by simplifying object manipulation, and lower physical demand by optimizing the scale of interaction. The performance dimension was crucial in understanding how automatic scaling influenced participants’ perceived effectiveness in completing tasks.
Figure 4 presents a comprehensive overview of the NASA-TLX scores across different dimensions for both manual and automatic scaling conditions. The data reveal that automatic scaling demonstrated improvements across all categories, suggesting a broad-ranging positive impact on user experience. Notably, the most substantial differences were observed in physical demand (3.64 vs. 1.93) and frustration (3.86 vs. 2.07), indicating that automatic scaling significantly alleviated the physical strain and emotional stress associated with object manipulation in VR environments. The performance score, where higher values indicate better-perceived performance, was markedly higher for automatic scaling (8.71) compared to manual scaling (6.36). This considerable difference suggests that participants not only found tasks easier to complete with automatic scaling but also perceived their performance to be of higher quality.
Mental demand and effort also showed notable reductions with automatic scaling, further supporting the hypothesis that this technique effectively reduces the cognitive load on users. The difference in temporal demand (2.71 vs. 1.93), while still favoring automatic scaling, was less pronounced compared to other dimensions. This smaller difference in temporal demand suggests that while automatic scaling did alleviate time pressure to some extent, its impact on perceived time constraints was less substantial than its effects on other aspects of user experience.
The NASA-TLX results indicate that automatic scaling contributes to a reduction in perceived workload across multiple aspects of user experience in VR environments. Improvements observed across all categories suggest that the automatic scaling technique has the potential to enhance various facets of user interaction in virtual reality. These findings support the effectiveness of automatic scaling and highlight areas for future research and development in VR interface design, particularly for applications involving precise object manipulation and extended use periods.
In addition to workload assessment, we utilized the Virtual Reality Sickness Questionnaire (VRSQ) and a custom questionnaire to evaluate potential discomfort and various aspects of user experience associated with our VR system [27]. The VRSQ scores showed a trend toward reduced discomfort with automatic scaling (3.36) compared to manual scaling (6.86). However, this difference did not reach statistical significance (p = 0.076). These results suggest that automatic scaling may have the potential to improve comfort in VR interactions, but further investigation with larger sample sizes or extended exposure times is necessary to draw definitive conclusions about its impact on VR sickness symptoms.
The custom questionnaire, designed to assess multiple dimensions of the user experience, comprised eight key aspects: work concentration, the naturalness of interaction, technical issues, fatigue reduction, scaling appropriateness, ease of task performance, overall user experience satisfaction, and likelihood of recommendation. Each dimension was evaluated on a 10-point Likert scale, resulting in a maximum possible score of 80 points. The results indicated a strong preference for the automatic scaling condition (73.50 out of 80) over the manual scaling condition (49.79 out of 80), with a statistically significant difference (p < 0.001). This substantial difference suggests that automatic scaling significantly enhanced various aspects of the user experience.
Participants reported higher levels of work concentration and found interactions more natural with automatic scaling. They also experienced fewer technical issues and reported reduced fatigue compared to the manual scaling condition. The appropriateness of scaling was consistently rated higher in the automatic condition, indicating that the system effectively adjusted virtual object sizes to facilitate interaction. The ease of task performance was also notably higher with automatic scaling, aligning with the improved accuracy observed in the quantitative data.
The cumulative findings indicate that automatic scaling yields substantial improvements in object manipulation accuracy, diminishes cognitive burden, and augments overall user satisfaction within virtual reality environments. The discrepancy between completion times and task accuracy highlights the importance of considering multiple performance metrics in evaluating VR interaction techniques. Furthermore, the comprehensive assessment provided by the custom questionnaire offers valuable insights into the multifaceted impact of automatic scaling on the user experience in VR environments.
Qualitative analysis of the interview data revealed several key themes regarding participants’ experiences with automatic scaling in virtual reality. The primary themes that emerged were enhanced object manipulation, improved task performance, reduced physical strain, and overall positive user experience. Participants consistently reported that automatic scaling facilitated easier and quicker assembly of virtual objects. Many noted the intuitive nature of the feature, emphasizing its contribution to a positive VR experience through enhanced virtuality and interactivity. This finding aligns with the quantitative results, particularly the improved task accuracy observed in the automatic scaling condition. Furthermore, participants frequently compared automatic scaling favorably to manual scaling, noting the natural feel of automatic scaling and the difficulties associated with manual adjustments.
Improvements in task performance were reported by several participants, with better visibility and easier manipulation of objects cited as key factors. Participants noted that automatic scaling allowed for a comprehensive view of the entire object within their field of vision, facilitating more accurate assembly. This corroborates the quantitative findings of improved task accuracy in the automatic scaling condition. Reduced physical strain was another significant theme that emerged from the interviews. Participants reported decreased wrist strain and improved ease of object rotation with automatic scaling. They noted that the appropriate reduction in object size led to less physical exertion during manipulation tasks and enhanced precision in object rotation. This aligns with the lower NASA-TLX scores observed in the quantitative analysis, indicating reduced perceived workload in the automatic scaling condition.
Overall, participants reported a highly positive user experience with automatic scaling. Many described the feature as natural and seamless, noting that the scaling process often occurred without conscious awareness. Participants emphasized that the work environment improved without inducing discomfort. This positive experience is reflected in the lower VRSQ scores for the automatic scaling condition, indicating reduced VR sickness symptoms. Despite the overwhelmingly positive feedback, some participants suggested potential improvements. Recommendations included the ability to make minor manual adjustments post-automatic scaling and the addition of visual indicators for scale levels. These suggestions provide valuable direction for future refinements of the automatic scaling technology, highlighting the importance of balancing automation with user control in VR interfaces.

6. Conclusions

We have presented an interface that automatically adjusts the avatar’s scale to facilitate the manipulation of objects of various sizes in virtual reality. When a user selects a specific object in the proposed interface, the avatar’s size increases or decreases to easily manipulate the selected object within the field of view, considering the sizes of the selected object and its surrounding objects. Through a preliminary study with a small group, we confirmed the potential of this automatic scaling and simultaneously identified issues with gradual scaling and avatar viewpoint movement. In the main study involving a larger number of participants, based on an interface that addresses these issues, we found that the proposed automatic scaling interface was generally much more effective compared to manual scaling.
The quantitative results demonstrated that the automatic scaling interface allowed for more accurate interaction and task completion with a lower mental and physical load than the manual scaling interface. The qualitative results support and expand upon the quantitative findings, providing deeper insights into the user experience of automatic scaling in virtual reality environments. The results strongly suggest that automatic scaling enhances object manipulation, improves task performance, reduces physical strain, and contributes to a more comfortable and intuitive VR experience. These findings not only validate the effectiveness of the automatic scaling approach but also provide valuable insights for future developments in VR interface design, pointing toward a more user-centric and adaptive virtual reality experience.
Future research should focus on implementing more objective measures of task completion and establishing clear, standardized criteria for determining when a task is truly finished. This refinement in methodology would allow for a more precise evaluation of how automatic scaling affects various aspects of task performance in virtual environments, including potential impacts on completion speed. Additionally, exploring the effects of automatic scaling across a wider range of tasks and virtual environments could provide broader insights into its applicability and benefits in different VR scenarios.

Author Contributions

Conceptualization, K.H.L.; methodology, K.H.L. and D.L.; software, D.L. and S.H.; validation, K.H.L. and D.L.; formal analysis, D.L.; investigation, D.L.; resources, K.H.L.; data curation, D.L.; writing—original draft preparation, K.H.L. and D.L.; writing—review and editing, K.H.L. and D.L.; visualization, D.L.; supervision, K.H.L.; project administration, K.H.L.; funding acquisition, K.H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (no. NRF-2021R1F1A1046373).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of Kwangwoon University (7001546-202400131-HR(SB)-001-04).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data are available upon request due to privacy or other restrictions.

Acknowledgments

The present research was conducted under the Excellent Researcher Support Project of Kwangwoon University in 2023.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. LaViola, J.J., Jr.; Kruijff, E.; McMahan, R.P.; Bowman, D.A.; Poupyrev, I.P. 3D User Interfaces: Theory and Practice, 2nd ed.; Addison-Wesley Professional: Boston, MA, USA, 2017; 400p. [Google Scholar]
  2. Argelaguet, F.; Andujar, C. A Survey of 3D Object Selection Techniques for Virtual Environments. Comput. Graph. 2013, 37, 121–136. [Google Scholar] [CrossRef]
  3. Mendes, D.; Caputo, F.M.; Giachetti, A.; Ferreira, A.; Jorge, J. A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments. Comput. Graph. Forum 2019, 38, 21–45. [Google Scholar] [CrossRef]
  4. Poupyrev, I.; Weghorst, S.; Billinghurst, M.; Ichikawa, T. Egocentric Object Manipulation in Virtual Environments: Empirical Evaluation of Interaction Techniques. Comput. Graph. Forum 1998, 17, 41–52. [Google Scholar] [CrossRef]
  5. Poupyrev, I.; Billinghurst, M.; Weghorst, S.; Ichikawa, T. The Go-Go Interaction Technique: Non-linear Mapping for Direct Manipulation in VR. In Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology, UIST’96, Seattle, WA, USA, 6–8 November 1996; ACM: New York, NY, USA, 1996; pp. 79–80. [Google Scholar]
  6. Bowman, D.A.; Hodges, L.F. An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments. In Proceedings of the 1997 Symposium on Interactive 3D Graphics, I3D’97, Providence, RI, USA, 27–30 April 1997; ACM: New York, NY, USA, 1997; pp. 35–38. [Google Scholar]
  7. Stoakley, R.; Conway, M.J.; Pausch, R. Virtual Reality on a WIM: Interactive Worlds in Miniature. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI’95, Denver, CO, USA, 7–11 May 1995; ACM Press/Addison-Wesley Publishing Co.: New York, NY, USA, 1995; pp. 265–272. [Google Scholar]
  8. Mine, M.R.; Brooks, F.P., Jr.; Sequin, C.H. Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction. In Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH’97, Los Angeles, CA, USA, 3–8 August 1997; ACM Press/Addison-Wesley Publishing Co.: New York, NY, USA, 1997; pp. 19–26. [Google Scholar]
  9. Steinicke, F.; Ropinski, T.; Hinrichs, K. Object Selection in Virtual Environments Using an Improved Virtual Pointer Metaphor. In Proceedings of the Computer Vision and Graphics: International Conference, ICCVG 2004, Warsaw, Poland, 22–24 September 2004; Springer: Berlin/Heidelberg, Germany, 2006; pp. 320–326. [Google Scholar]
  10. Lu, Y.; Yu, C.; Shi, Y. Investigating Bubble Mechanism for Ray-Casting to Improve 3D Target Acquisition in Virtual Reality. In Proceedings of the 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Atlanta, GA, USA, 22–26 March 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 35–43. [Google Scholar]
  11. Tseng, W.-J.; Huron, S.; Lecolinet, E.; Gugenheimer, J. FingerMapper: Mapping Finger Motions onto Virtual Arms to Enable Safe Virtual Reality Interaction in Confined Spaces. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI’23, Hamburg, Germany, 23–29 April 2023; Association for Computing Machinery: New York, NY, USA, 2023; pp. 1–14. [Google Scholar]
  12. Schjerlund, J.; Hornbæk, K.; Bergström, J. Ninja Hands: Using Many Hands to Improve Target Selection in VR. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, CHI’21, Yokohama, Japan, 8–13 May 2021; Association for Computing Machinery: New York, NY, USA, 2021; pp. 1–14. [Google Scholar]
  13. Kopper, R.; Ni, T.; Bowman, D.A.; Pinho, M. Design and Evaluation of Navigation Techniques for Multiscale Virtual Environments. In Proceedings of the IEEE Virtual Reality Conference (VR’06), Alexandria, VA, USA, 25–29 March 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 175–182. [Google Scholar]
  14. LaViola, J.J.; Feliz, D.A.; Keefe, D.F.; Zeleznik, R.C. Hands-Free Multi-Scale Navigation in Virtual Environments. In Proceedings of the 2001 Symposium on Interactive 3D Graphics (I3D’01), Chapel Hill, NC, USA, 26–29 March 2001; ACM: New York, NY, USA, 2001; pp. 9–15. [Google Scholar]
  15. Wingrave, C.A.; Haciahmetoglu, Y.; Bowman, D.A. Overcoming World in Miniature Limitations by a Scaled and Scrolling WIM. In Proceedings of the IEEE Symposium on 3D User Interfaces (3DUI’06), Alexandria, VA, USA, 25–26 March 2006; IEEE: Alexandria, VA, USA, 2006; pp. 11–16. [Google Scholar]
  16. Pivovar, J.; DeGuzman, J.; Suma, R.E. Virtual Reality on a SWIM: Scalable World in Miniature. In Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Christchurch, New Zealand, 12–16 March 2022; IEEE: Christchurch, New Zealand, 2022; pp. 912–913. [Google Scholar]
  17. Krekhov, A.; Cmentowski, S.; Emmerich, K.; Masuch, M.; Krüger, J. GulliVR: A Walking-Oriented Technique for Navigation in Virtual Reality Games Based on Virtual Body Resizing. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play (CHI PLAY’18), Melbourne, VIC, Australia, 28–31 October 2018; ACM: New York, NY, USA, 2018; pp. 243–256. [Google Scholar]
  18. Abtahi, P.; Gonzalez-Franco, M.; Ofek, E.; Steed, A. I’m a Giant: Walking in Large Virtual Environments at High Speed Gains. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19), Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019. Paper 522, 13p. [Google Scholar]
  19. Weissker, T.; Franzgrote, M.; Kuhlen, T. Try This for Size: Multi-Scale Teleportation in Immersive Virtual Reality. IEEE Trans. Vis. Comput. Graph. 2024, 30, 2298–2308. [Google Scholar] [CrossRef] [PubMed]
  20. Cho, I.; Li, J.; Wartell, Z. Evaluating Dynamic-Adjustment of Stereo View Parameters in a Multi-Scale Virtual Environment. In Proceedings of the 2014 IEEE Symposium on 3D User Interfaces (3DUI), Minneapolis, MN, USA, 29–30 March 2014; IEEE: Minneapolis, MN, USA, 2014; pp. 91–98. [Google Scholar]
  21. Argelaguet, F.; Maignant, M. GiAnt: Stereoscopic-Compliant Multi-Scale Navigation in VEs. In Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology (VRST’16), Munich, Germany, 2–4 November 2016; ACM: New York, NY, USA, 2016; pp. 269–277. [Google Scholar]
  22. Zhang, X.; Furnas, G.W. mCVEs: Using Cross-Scale Collaboration to Support User Interaction with Multiscale Structures. Presence 2005, 14, 31–46. [Google Scholar] [CrossRef]
  23. Le Chénéchal, M.; Lacoche, J.; Royan, J.; Duval, T.; Gouranton, V.; Arnaldi, B. When the Giant meets the Ant: An Asymmetric Approach for Collaborative and Concurrent Object Manipulation in a Multi-Scale Environment. In Proceedings of the 2016 IEEE Third VR International Workshop on Collaborative Virtual Environments (3DCVE), Greenville, SC, USA, 20 March 2016; IEEE: Greenville, SC, USA, 2016; pp. 18–22. [Google Scholar]
  24. Piumsomboon, T.; Lee, G.A.; Irlitti, A.; Ens, B.; Thomas, B.H.; Billinghurst, M. On the Shoulder of the Giant: A Multi-Scale Mixed Reality Collaboration with 360 Video Sharing and Tangible Interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19), Glasgow, UK, 4–9 May 2019; ACM: New York, NY, USA, 2019. Paper 228, 17p. [Google Scholar]
  25. Drey, T.; Albus, P.; der Kinderen, S.; Milo, M.; Segschneider, T.; Chanzab, L.; Rietzler, M.; Seufert, T.; Rukzio, E. Towards Collaborative Learning in Virtual Reality: A Comparison of Co-Located Symmetric and Asymmetric Pair-Learning. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI’22), New Orleans, LA, USA, 29 April–5 May 2022; ACM: New York, NY, USA, 2022. Paper 610, 19p. [Google Scholar]
  26. Hart, S.G. NASA-Task Load Index (NASA-TLX); 20 Years Later. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 904–908. [Google Scholar] [CrossRef]
  27. Kim, H.K.; Park, J.; Choi, Y.; Choe, M. Virtual Reality Sickness Questionnaire (VRSQ): Motion Sickness Measurement Index in a Virtual Reality Environment. Appl. Ergon. 2018, 69, 66–73. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Demonstration of the automatic scaling process in the virtual environment. (a) Top-left: Avatar in its default state. (b) Top-right: User’s perspective when grasping a large object without scaling, showing limited visibility and object truncation. (c) Bottom-left: Avatar after automatic scaling has been applied. (d) Bottom-right: User’s perspective after scaling, demonstrating improved visibility and complete object view within the field of vision.
Figure 2. Experimental task setup used in both preliminary and main studies. (Left) The target structure that participants were required to build. (Right) The component objects provided to participants for assembly, arranged on the virtual floor.
Figure 3. A participant engaging in the main experimental task. The subject is wearing an Oculus Quest 2 VR headset and using motion controllers to interact with virtual objects. The experimental setup allows for natural movement and interaction within the virtual environment, facilitating the comparison between automatic and manual scaling conditions.
Figure 4. Comparison of NASA-TLX scores between manual and automatic scaling conditions.
Table 1. Comparison of task completion times between manual and automatic scaling in a VR environment.

Participant   Manual Time (min:s)   Automatic Time (min:s)   Difference (s, Automatic − Manual)
1             4:51                  4:30                     −21
2             6:06                  8:07                     +121
3             11:00                 6:52                     −248
4             8:41                  4:35                     −246
5             8:40                  7:48                     −52
6             9:16                  10:01                    +45
Table 2. Comparison of manual and automatic scaling across various metrics.

Metric            Manual Scaling            Automatic Scaling         t(12)    p-Value
VRSQ              6.86 (SD = 7.40)          3.36 (SD = 5.15)          1.93     0.076
NASA-TLX          23.36 (SD = 13.14)        14.43 (SD = 6.35)         2.37     0.034
Questionnaire     49.79 (SD = 18.03)        73.50 (SD = 5.77)         −4.55    0.0005
Completion Time   430.07 s (SD = 205.86)    477.86 s (SD = 216.15)    −0.70    0.497
Accuracy          84.03% (SD = 13.38)       97.69% (SD = 3.83)        −3.53    0.0037
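The paired comparisons summarized in Table 2 can be reproduced with a standard paired t-test once the per-participant scores for each metric are available (the raw data are available on request, as noted in the Data Availability Statement). The sketch below uses scipy.stats.ttest_rel; the function name and the commented placeholder arrays are illustrative and are not the study’s data.

```python
import numpy as np
from scipy import stats

def paired_comparison(name: str, manual: np.ndarray, automatic: np.ndarray) -> None:
    """Report means, sample SDs, and a paired t-test for one metric,
    in the same form as the rows of Table 2."""
    t, p = stats.ttest_rel(manual, automatic)  # sign convention: manual - automatic
    print(f"{name}: manual {manual.mean():.2f} (SD = {manual.std(ddof=1):.2f}), "
          f"automatic {automatic.mean():.2f} (SD = {automatic.std(ddof=1):.2f}), "
          f"t = {t:.2f}, p = {p:.4f}")

# Usage with hypothetical placeholder arrays (one score per participant):
# nasa_manual = np.array([...])
# nasa_automatic = np.array([...])
# paired_comparison("NASA-TLX", nasa_manual, nasa_automatic)
```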