Article

Quality of Experience That Matters in Gaming Graphics: How to Blend Image Processing and Virtual Reality

by
Awais Khan Jumani
1,*,
Jinglun Shi
1,
Asif Ali Laghari
2,
Vania V. Estrela
3,
Gabriel Avelino Sampedro
4,5,
Ahmad Almadhor
6,
Natalia Kryvinska
7 and
Aftab ul Nabi
1
1
School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510000, China
2
Software College, Shenyang Normal University, Shenyang 110000, China
3
Telecommunications Department, Federal Fluminense University (UFF), Niterói 24020-141, RJ, Brazil
4
Faculty of Information and Communication Studies, University of the Philippines Open University, Los Baños 4031, Philippines
5
Center for Computational Imaging and Visual Innovations, De La Salle University, 2401 Taft Ave., Malate, Manila 1004, Philippines
6
Department of Computer Engineering and Networks, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
7
Department of Information Management and Business Systems, Faculty of Management, Comenius University Bratislava, Odbojárov 10, 82005 Bratislava, Slovakia
*
Author to whom correspondence should be addressed.
Electronics 2024, 13(15), 2998; https://doi.org/10.3390/electronics13152998
Submission received: 14 June 2024 / Revised: 20 July 2024 / Accepted: 26 July 2024 / Published: 30 July 2024

Abstract

This paper investigates how virtual reality (VR) technology can increase the quality of experience (QoE) delivered by graphics quality within a gaming environment. Graphics quality shapes both the VR environment and the user experience. To gather relevant data, we conducted a live user study comparing games with high- and low-quality graphics. The qualitative feedback obtained through questionnaires demonstrates the importance of contextualizing users’ experiences of playing both games. Furthermore, our findings confirm the crucial role of graphics quality in driving user engagement and enjoyment during gaming sessions. Users consistently reported feeling more connected when interacting with games rendered with high-quality graphics, whereas games delivered with low-quality graphics received correspondingly low user ratings. Further examination of VR technology reveals its potential to transform graphics quality within gameplay.

1. Introduction

Gaming has grown into an influential entertainment industry that attracts numerous people worldwide, and ongoing technological advancements drive its rapid development [1]. Various factors affect the graphics quality of games, and graphics are among the most important contributors to a game’s entertainment value [2]. High-quality gaming graphics increase QoE and earn high ratings for games and service providers, which benefits cloud gaming providers: popular services attract more players to purchase gaming services, increasing revenue [3,4,5]. In the industry’s beginnings, early video games like Pong and Space Invaders showcased basic graphics comprising simple shapes and colors on two-dimensional screens [6,7]. Technological progress then brought revolutionary enhancements in game immersion [8]. One of the major advances in gaming graphics came in the 1990s with the introduction of 3D graphics, which allow players a more immersive and realistic experience [9]. This was achieved using a mathematical model to represent the three-dimensional world, which was then projected onto a two-dimensional screen to create the illusion of depth [10]. In the early 2000s, a new technique called ray tracing, which simulates how light interacts with objects in the real world, was introduced to games. By tracing the path of light rays from a source to an object and calculating their reflection, refraction, and absorption, ray tracing can produce highly realistic and visually appealing pictures. The combination of 3D graphics and ray tracing has resulted in modern games featuring stunningly realistic visuals that can be enjoyed by players of all ages [11]. High-quality graphics can give game developers a competitive advantage in today’s gaming industry, as players are often drawn to games with better graphics.
The gaming industry is constantly evolving, with new technologies emerging regularly [12]. Game developers who stay ahead of the curve and incorporate the latest advancements in graphics technology into their games have a higher chance of succeeding in the market [13]. Unfortunately, many mobile VR game developers do not prioritize graphics quality: they may wrongly believe that graphics are less critical in VR games than in traditional ones, or they may lack the resources to create high-quality visuals [14]. It is therefore necessary to investigate the factors that impact graphics quality when clients access games from remote gaming clouds [15].
To investigate the factors that impact the graphics quality of games, we conducted experiments on two mobile VR games, one with low-quality and one with high-quality graphics. We enlisted 46 participants and collected their user experience data; some were experienced with mobile VR games, while others were new to the technology. We tested both games on two different network platforms: wireless fidelity (Wi-Fi) and mobile data. Following the testing phase, we gathered user feedback using a QoE survey that asked participants how much they enjoyed each game and how realistic they found the graphics. The results showed that participants had a significantly higher QoE for the game with high graphics quality: they reported enjoying it more, finding its graphics convincing, and experiencing less sickness and eye strain. We also found that the network platform can affect the QoE of mobile VR games. Participants reported a lower QoE when playing the high-quality game over mobile data than over Wi-Fi, because mobile data connections are often less reliable and can exhibit more latency, leading to stuttering and lag.
The main contribution of this paper is to study and explore the following:
  • QoE assessment with parameters such as image blur, video resolution, graphics quality, and buffering for online VR games played through the mobile app.
  • Subjective QoE data collected while users played the high-quality and low-quality graphics games with a VR box over Wi-Fi and 5G networks. The purpose of comparing high and low quality is to identify how graphics quality drives user engagement on a particular cloud.
This paper is organized into six sections. Section 2 presents the background of the study. Section 3 describes the methodology for the experimental setup and data collection. Results and discussion are given in Section 4, and Section 5 details the proposed framework for delivering high-quality graphics games in the future. Finally, the paper is concluded in Section 6.

2. Background of the Study

Blending image processing and VR can enhance VR visual quality and the overall experience. Image processing techniques can improve the resolution and color accuracy of VR content and reduce motion sickness [16,17]. The following subsections describe some ways to blend image processing and VR.

2.1. Super-Resolution Techniques

Super-resolution methods increase the resolution of visuals seen in VR. These algorithmic methods improve the resolution of low-resolution images without introducing noise or distortion. One such technique is the deep learning-based approach, which uses convolutional neural networks (CNNs) to reconstruct high-resolution images from low-resolution ones [18,19]. The primary objective of CNN-based super-resolution is to learn a mapping between low-quality and high-quality image pairs [20]. This is accomplished by training a deep neural network on a large dataset of images at varying resolutions, including some with very low resolutions. After training and deployment, the network can reconstruct high-resolution images from low-quality ones in real time [21,22].
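As a concrete point of reference, the sketch below implements plain bilinear upscaling in Python — the classical interpolation baseline that CNN-based super-resolution methods are trained to outperform. The tiny 2×2 image and the scale factor are illustrative only; they are not taken from this study.

```python
def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale image (list of lists of floats) by an integer factor."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * factor, w * factor
    out = [[0.0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            # Map each output pixel back into source coordinates.
            sy = min(y / factor, h - 1)
            sx = min(x / factor, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[y][x] = top * (1 - fy) + bot * fy
    return out

low_res = [[0.0, 1.0],
           [1.0, 0.0]]
high_res = bilinear_upscale(low_res, 2)  # 4x4 image with smooth transitions
```

A trained CNN replaces this fixed interpolation rule with a mapping learned from low/high-resolution image pairs, which is why it can recover detail that interpolation cannot.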

2.2. Color Correction

Color correction techniques can enhance the color accuracy of images in VR. These techniques adjust the color balance and tone of images to make them more realistic and visually appealing. One such technique is color grading, which involves adjusting the color and tone of images to achieve a specific look or mood [23]. Color grading can be performed manually or with automated algorithms that use machine learning to learn the color grading style of a particular film or game and apply it to new images, saving time and effort [24].
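A minimal sketch of the kind of per-channel adjustment a grading pass applies is the lift–gamma–gain model, shown below. The parameter values and the sample pixel are illustrative assumptions, not settings from any particular film or game.

```python
def lift_gamma_gain(rgb, lift, gamma, gain):
    """Apply a per-channel lift/gamma/gain color correction (values in [0, 1])."""
    graded = []
    for c, lf, gm, gn in zip(rgb, lift, gamma, gain):
        c = c * (1.0 - lf) + lf                   # lift raises the shadows
        c = max(0.0, min(1.0, c)) ** (1.0 / gm)   # gamma bends the midtones
        c = c * gn                                # gain scales the highlights
        graded.append(max(0.0, min(1.0, c)))
    return graded

# Warm up a neutral mid-grey pixel: boost red gain, cut blue slightly.
pixel = [0.5, 0.5, 0.5]
warm = lift_gamma_gain(pixel,
                       lift=(0.0, 0.0, 0.0),
                       gamma=(1.0, 1.0, 1.0),
                       gain=(1.1, 1.0, 0.9))  # -> red raised, blue lowered
```

An automated grading system would learn these per-channel curves from reference footage rather than hand-picking them.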

2.3. Motion Sickness Reduction

Motion sickness is a common problem in VR. Image processing techniques can reduce it by smoothing out the motion of objects in the VR environment. One approach is motion blur, which adds a blur effect to moving objects. Motion blur can be implemented using various techniques, such as post-processing or real-time rendering [25,26]. Post-processing motion blur applies a blur effect to the final image after it has been rendered, while real-time motion blur renders multiple images from different camera positions and blends them to create the blur effect [27,28].
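The blending variant described above can be sketched as accumulation motion blur: average several rendered frames so a moving object smears along its path. The single-row "frames" below are an illustrative stand-in for full rendered images.

```python
def blend_frames(frames):
    """Average a list of equally sized grayscale frames (lists of floats)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

# A bright object (1.0) moving one pixel per frame across a dark row.
frames = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
]
blurred = blend_frames(frames)  # the object smears across the pixels it crossed
```

Post-processing motion blur achieves a similar look in a single pass by convolving the final image along each pixel's motion vector instead of rendering extra frames.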

2.4. Stereoscopic Image Processing

Stereoscopic image processing can enhance the VR experience by adding depth to images. This is achieved using depth mapping techniques, which create a depth map of the scene and use it to render stereoscopic images. Depth mapping relies on various algorithms, such as structured light or time-of-flight, to build the depth map; once created, the map can be used to render stereoscopic images in real time [29].
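A toy sketch of depth-based stereo rendering follows: each pixel is shifted horizontally by a disparity derived from its depth, producing a left/right pair. This assumes a single scanline, integer disparities, and only crude occlusion handling (the brighter value wins), so it is a conceptual illustration rather than a production renderer.

```python
def stereo_pair(row, depth, max_disp=2):
    """Shift pixels by a depth-derived disparity; depth in (0, 1], smaller = nearer."""
    w = len(row)
    left = [0.0] * w
    right = [0.0] * w
    for x in range(w):
        d = round(max_disp * (1.0 - depth[x]))  # nearer pixels shift further
        lx, rx = x + d, x - d
        if 0 <= lx < w:
            left[lx] = max(left[lx], row[x])    # crude occlusion: brighter wins
        if 0 <= rx < w:
            right[rx] = max(right[rx], row[x])
    return left, right

row = [0.0, 0.9, 0.0, 0.0]    # one bright pixel on the scanline
depth = [1.0, 0.5, 1.0, 1.0]  # that pixel is nearer than the background
left, right = stereo_pair(row, depth)  # the near pixel lands at different x in each eye
```

The opposite horizontal offsets in the two views are what the brain fuses into perceived depth.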
Beheiry [30] discussed the potential of VR in scientific research. Now that VR has become affordable and accessible, the authors questioned its utility in research beyond the visualization and navigation of 3D data. They argued that the future of VR in research lies in data treatment and numerical simulations, particularly those involving human cognition and automated algorithms; VR may become as important to imaging data as machine learning is today. Laghari [31] explored the impact of visual quality and display size on the QoE of online gaming on personal computers (PCs) and mobile phones. Their study focused on two different video games, Need for Speed Underground 2 and Subway Surfers, to gather user experience data across different genres and types of games, and used Mean Opinion Score (MOS) surveys to capture user feedback and compare the gaming conditions.
Similarly, Illahi [32] presented a prototype cloud-based video gaming service that uses foveated video encoding (FVE) to reduce the bandwidth demands on the host server, which is essential for streaming HD video. The FVE technique encodes video streams using the user’s gaze direction and the visual system’s non-uniform acuity, and the prototype can be used with any game without modifying the game engine. The authors tested the prototype with representative games from various genres to determine the impact of FVE parameters on bandwidth needs and the scheme’s viability from a latency standpoint. Their user study of first-person shooter games indicated a “sweet spot” for the FVE settings that allows substantial bandwidth savings without negatively impacting the user experience. Likewise, Nevelsteen [33] addressed the problem of defining the virtual world, which creates difficulties in designing architectures for virtual worlds. He compared his definition to related work and used it to classify advanced technologies, including pseudo-persistent video games, MANETs, and virtual and mixed reality, providing a breakdown of the properties that set these technologies apart. An ontology shows the relationships between complementary terms and acronyms, and pseudo-persistence is used to categorize technologies that only mimic persistence.
Moreover, Jumani [34] focused on the significance of AR-based learning in education and the need for immersive visualization of the virtual environment. The authors highlighted the benefits of AR in both the technology and education sectors and discussed open research issues related to the future of VR. Furthermore, Laghari [35] surveyed and analyzed previous cloud gaming models, identifying aspects for future development that can improve QoS, and proposed improvements to user satisfaction levels in line with service level agreements (SLAs). Krahenbuhl [36] introduced specialized rendering code that produces ground truth labels for tasks like instance segmentation, semantic labeling, depth estimation, optical flow, intrinsic image decomposition, and instance tracking, avoiding the expensive and time-consuming process of manual data collection. The author collected a dataset of 220k training images and 60k test images across three video games and evaluated algorithms for optical flow, depth estimation, and intrinsic image decomposition. The video game data were found to be visually closer to real-world images than other synthetic datasets.
Likewise, Madiha [37] proposed a QoE/S framework for evaluating gaming services on the fog network, which provides cloud-like services at the network edge to support low-latency response requirements. Their framework is based on objective and subjective QoE, captured automatically using agent technology; the proposed model monitors, analyzes, generates reports, and changes policies without administrator intervention. Similarly, Laghari [38] compared the differences and features of VR and AR, which are increasingly popular for creating immersive experiences in videos and games: VR simulates a computer-generated environment, while AR adds simulated components to the real-world scene. The authors explored the various devices and services offered by these technologies. Correspondingly, Laghari [39] examined the QoE of gaming across different methods, finding that PC installation provided the best QoE for both games, while playing online from the cloud server on either device (mobile or PC) resulted in the worst user experience. The results suggest that game developers and players should consider the gaming method when aiming for a better gaming experience.
Likewise, Brookes [40] highlighted the importance of VR systems as a powerful tool for human behavior research, as they enable researchers to create 3D visual scenes and measure responses to visual stimuli at scale. However, the 3D graphics engines used in VR systems are typically designed for game developers, so the author developed the Unity Experiment Framework (UXF), a set of tools and programming syntaxes that allow researchers to implement data collection. Ahir [41] surveyed developments in VR technology, a rapidly growing field with significant investment from research and industry. VR combines technologies to create a virtual atmosphere that can dynamically simulate the real world, allowing humans to interact with the environment in real time.
Elbamby [42] discussed the challenges and enablers for achieving ultra-reliable, low-latency VR. The authors presented a case study of an interactive VR gaming arcade and the potential of leveraging mmWave communication and edge computing to achieve the future vision of wireless VR. Albaghajati [43] presented an attribute-based framework for classifying and comparing game-testing techniques that can guide practitioners and researchers; the framework analyzes several prominent techniques, revealing gaps and suggesting future research directions. Biggar [44] presented an analysis of two-player games using response graphs, characterizing the games that share a response graph with a zero-sum game. The author discussed some larger games, highlighted the non-trivial properties of a game captured by its response graph, and suggested that response graphs could aid in understanding the influence of properties such as zero-sumness. Moreover, Avola [45] explored innovative methods to improve the user experience in virtual reality (VR) by addressing the challenge of locomotion within virtual environments (VEs). Real Walking (RW) is identified as an effective Virtual Locomotion Technique (VLT) due to its reduced cybersickness, but it demands real-world space proportional to the virtual space. The authors proposed a novel method to enhance reorientation in VEs using a dynamic Rotation Gain Multiplication Factor (RGMF), which adapts to the user’s VR competence, aiming to minimize the physical space required for VR navigation while limiting cybersickness. The system was evaluated with both VR experts and novices using established metrics such as the Simulator Sickness Questionnaire (SSQ), the System Usability Score (SUS), and the Witmer presence questionnaire. The results demonstrated the effectiveness of the proposed system, particularly in reducing physical space requirements and cybersickness while maintaining a high-quality user experience.
Furthermore, Kari [46] explored the factors influencing the acceptance and use of virtual reality (VR) games. Recognizing VR as a significant technological trend, the study extended the hedonic-motivation system acceptance model (HMSAM) by incorporating utilitarian and inconvenience factors. An online survey of 473 VR gamers was analyzed using structural equation modeling, revealing that enjoyment is the primary driver of VR gaming intention and immersion. The study found that physical discomfort and VR sickness do not significantly reduce use intentions and immersion levels. The findings highlight the importance of hedonic aspects over utilitarian benefits in driving VR gaming and provide valuable insights for VR game developers and marketers to enhance user experience and adoption.

2.5. Quality of Art Direction

The quality of art direction in gaming graphics is a crucial aspect that can impact the success of a game. Art direction refers to the visual style and design of a game, including its color palette, character designs, level designs, and aesthetics. Games with high-quality art direction are visually attractive and provide an immersive experience for players [47,48]. Some examples illustrate the impact of art direction on a game’s success. The Legend of Zelda: Breath of the Wild has a unique art style that combines cel shading with realistic textures to create a visually stunning game world; its art direction plays a significant role in creating a fascinating experience for players and contributes to the game’s success. Another example is Limbo, which has a distinct monochromatic art style. The impact of art direction on a game’s success can also be measured through player engagement and sales figures: games with good art direction tend to receive positive reviews and higher sales, indicating that players are attracted to their visual style [49].

2.6. Quality of Texture

The quality of textures in gaming graphics profoundly influences the gaming experience. Textures express the surface characteristics of objects within the game world, and high-quality textures contribute to a convincing game world. An excellent example is Assassin’s Creed Valhalla, where the high-quality textures on buildings, trees, and other objects create an attractive player experience. The impact of texture quality can be measured quantitatively through technical specifications such as the resolution of the texture map: texture maps are stored as grids of pixel values, and texture quality increases with the resolution of the map. Texture map size also affects the game’s performance, as larger texture maps require more memory and processing power [50].
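The memory cost mentioned above is easy to quantify. The arithmetic below assumes uncompressed RGBA textures at 4 bytes per pixel (real engines typically use compressed formats, so these are upper-bound figures):

```python
def texture_bytes(width, height, bytes_per_pixel=4):
    """Memory footprint of an uncompressed texture map."""
    return width * height * bytes_per_pixel

k1 = texture_bytes(1024, 1024)  # a "1K" texture: 4 MiB uncompressed
k4 = texture_bytes(4096, 4096)  # a "4K" texture: 64 MiB uncompressed
ratio = k4 // k1                # 16x the memory for 4x the side length
```

Quadrupling the side length multiplies memory (and bandwidth) by sixteen, which is why texture resolution is one of the first settings scaled down on constrained mobile VR hardware.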

2.7. Quality of Lighting

Lighting refers to the simulation of how light interacts with the game world. High-quality lighting can create a sense of depth and realism, while low-quality lighting can make the game world feel flat and uninteresting. Two examples illustrate the impact of lighting quality. The Last of Us Part II features highly realistic lighting that enhances the mood and atmosphere of the game world [51]; the lighting is dynamic, changing with the time of day and the player’s position, which creates a sense of engagement and contributes to the game’s emotional impact. Another example is Fortnite, whose vibrant and colorful lighting is essential to the game’s identity and is designed to be simple and easy to read, making it easier for players to navigate the game world and identify potential threats [52].
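At the core of such lighting simulation is the diffuse (Lambertian) term: surface brightness falls off with the cosine of the angle between the surface normal and the light direction. The sketch below is a minimal illustration with hand-picked vectors, not any engine's actual shading code.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return [x / length for x in v]

def diffuse(normal, light_dir, light_intensity=1.0):
    """Lambert's cosine law: intensity = max(0, n . l) * light intensity."""
    n = normalize(normal)
    l = normalize(light_dir)
    return light_intensity * max(0.0, dot(n, l))

facing = diffuse([0.0, 1.0, 0.0], [0.0, 1.0, 0.0])    # light directly overhead
grazing = diffuse([0.0, 1.0, 0.0], [1.0, 1.0, 0.0])   # light at 45 degrees
backside = diffuse([0.0, 1.0, 0.0], [0.0, -1.0, 0.0]) # light behind the surface
```

Dynamic lighting like that in The Last of Us Part II amounts to re-evaluating terms like this every frame as the light direction (e.g., the sun's position) changes.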

2.8. Quality of Animation

Animation refers to the movement of objects and characters within the game world. High-quality animation can make characters and objects feel more lifelike and provide a more enjoyable experience for players. Red Dead Redemption 2 features highly realistic animation that enhances the overall immersion of the game: characters move naturally and fluidly, and the attention to detail contributes to the overall sense of realism in the game world. Likewise, Undertale features simple but effective animation that conveys emotion and personality for each character; it is highly stylized, with characters moving in a jerky, exaggerated manner that adds to the game’s charm and personality. In computer graphics, animation is often driven by algorithms that simulate the behavior of objects in the real world [53,54].
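The contrast between fluid and deliberately jerky movement can be sketched with keyframe interpolation: smooth animation interpolates continuously between keyframes, while a stylized look quantizes the interpolation parameter into steps. The keyframe values and step count below are illustrative.

```python
def lerp(a, b, t):
    """Linear interpolation between keyframe values a and b, with t in [0, 1]."""
    return a + (b - a) * t

def stepped(a, b, t, steps=4):
    """Quantize t before interpolating, mimicking jerky, stylized animation."""
    q = int(t * steps) / steps
    return lerp(a, b, q)

# A character moving from position 0 to 10 over eleven sampled times.
smooth = [lerp(0.0, 10.0, t / 10) for t in range(11)]   # fluid motion
jerky = [stepped(0.0, 10.0, t / 10) for t in range(11)] # holds, then jumps
```

Production animation systems layer easing curves, blending, and physics on top of this basic idea, but interpolation between keyframes is the common foundation.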

2.9. Quality of Performance

Performance quality in gaming graphics is often measured in terms of frame rate and resolution. Frame rate refers to the number of frames displayed per second (fps) on the screen, while resolution refers to the number of pixels. A higher frame rate provides a smoother and more fluid gaming experience, making it easier for players to control their characters and react to in-game events; a lower frame rate can make the game feel choppy and unresponsive, which is frustrating for players. Similarly, a higher resolution provides a sharper and more detailed image, making the game world more enjoyable, while a lower resolution yields a blurrier, less detailed image that detracts from the gaming experience. Game developers often use performance optimization techniques to ensure that their games run smoothly and efficiently [55,56].
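The trade-off between frame rate and resolution comes down to simple budgets: the renderer must produce each frame within 1000/fps milliseconds, and per-frame cost scales roughly with pixel count. The figures below illustrate the arithmetic.

```python
def frame_budget_ms(fps):
    """Time available to render one frame at a given frame rate."""
    return 1000.0 / fps

def pixel_count(width, height):
    """Pixels the renderer must shade every frame at a given resolution."""
    return width * height

budget_60 = frame_budget_ms(60)      # ~16.7 ms per frame at 60 fps
budget_30 = frame_budget_ms(30)      # ~33.3 ms per frame at 30 fps
full_hd = pixel_count(1920, 1080)    # 2,073,600 pixels (1080p)
uhd_4k = pixel_count(3840, 2160)     # exactly 4x the pixels of 1080p
```

Doubling the target frame rate halves the time budget while quadrupling resolution quadruples the shading work, which is why mobile VR titles must compromise on one or the other.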

3. Methodology

3.1. Gaming Environment

Several studies have evaluated user QoE while playing a game on a mobile device within a VR box. Figure 1 shows the experimental model of our VR gaming environment. In this model, the user first wears the Lenovo Mirage Solo VR headset and selects the network type (Wi-Fi or 5G). From the game cloud, the user then selects one of our target games, with either high- or low-quality graphics. When the user finishes the game, they immediately submit their QoE feedback.
Figure 2 shows the real-time experimental environment, where each player plays alone for half an hour per game, playing both games one by one on a single network. We gave participants a VR headset with our high-resolution game connected to Wi-Fi, and they played; afterwards, they completed our QoE form on game quality. Each person played four sessions in total: the high- and low-quality games over Wi-Fi, and the high- and low-quality games over 5G.
Two networks were selected for this experiment: a Wi-Fi network and a 5G mobile data network. We invited 65 players, of whom 51 actively participated; 5 of these did not understand the game and the experimental procedure, leaving 46 participants. These 46 participants comprised 19 females and 27 males between 22 and 30 years of age. Most had about 10 years of gaming experience, a smaller number had less than 10 years, and a few were new to gaming. We collected written consent from all active participants, who were asked to play each game for a minimum of half an hour. Each session lasted 30 min per game, and a total of 4 sessions were performed per participant across both networks. Participants played both games on one network in a single day and returned another day to play the same games on the other network. After finishing each day’s games, they filled out a subjective QoE questionnaire shared as a Google form; the same procedure was then repeated for the second network. The questionnaire employed the five-point Mean Opinion Score (MOS) scale used in QoE research to gauge respondents’ satisfaction with aspects of their experience such as image quality, graphics quality, resolution, and buffering. The whole experiment took 4 months because most participants had scheduling constraints. The experiment focused on four parameters: blur, resolution fluctuation/jaggedness, graphics quality, and buffering. With these parameters, it is clearly noticeable that the network can affect every type of VR game.
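Scoring such a questionnaire is straightforward: each parameter's MOS is the mean of the 1–5 ratings collected for it. The sketch below uses hypothetical ratings, not the study's actual data.

```python
def mean_opinion_score(ratings):
    """Average of 1-5 ratings; rejects out-of-range values."""
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be on the 1-5 scale")
    return sum(ratings) / len(ratings)

# Hypothetical responses for one parameter (e.g., graphics quality).
graphics_quality = [5, 4, 4, 5, 3, 4]
mos = mean_opinion_score(graphics_quality)  # one value on the 5-point scale
```

Computing one such mean per parameter and per network condition yields the MOS values compared in the results section.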

3.2. Network and Device Selection

We followed the standards developed by ITU-R BT [57] and ITU-T Rec. [58] while developing the VR mobile games. The games ran on a Samsung Galaxy A51 (octa-core 2300 MHz CPU with four 2.3 GHz Cortex-A73 cores and four 1.7 GHz Cortex-A53 cores, 64-bit architecture; Mali-G72 MP3 GPU). Online access to the games was granted to players via a cloud or server provider [59]. Network speed was checked with the well-known website http://www.speedtest.net (accessed on 6 June 2023). Download speeds for mobile data on the 5G network were measured at 10.80 MB/s and upload speeds at 11.78 MB/s; for Wi-Fi, download speeds were 24.41 MB/s and upload speeds 40.87 MB/s. The 5G network made mobile VR gaming possible, but QoE testing during real-time games over 5G found that users were less satisfied with both the visual and display quality.

3.3. Game Selections

For this experiment, we selected two games: a high-quality graphics game, Gunjack 2: End of Shift, and a low-quality graphics game, Lamper VR. Figure 3 shows both games visualized through the VR box. Both games were accessed from the cloud.

4. Results and Discussion

Results and Analysis: High-Quality and Low-Quality Gaming Experience through VR Box

MOS data were collected for the high- and low-quality VR games; Figure 4 shows the resulting graph. Blur measures how blurry the video is: the lower the blur score, the less blurry the video. Wi-Fi-H (Wi-Fi-High) denotes the high-quality game played on the Wi-Fi network, and Wi-Fi-L (Wi-Fi-Low) denotes the low-quality game played on the Wi-Fi network; 5G-H and 5G-L denote the corresponding games played on the 5G network.
Wi-Fi-H has the lowest blur score, followed by 5G-H, Wi-Fi-L, and 5G-L, indicating that Wi-Fi-H and 5G-H have the best video quality in terms of blur. Resolution fluctuation/jaggedness measures how jagged the video appears: the lower the score, the less jagged the video. Wi-Fi-L has the lowest resolution fluctuation/jagged score, followed by Wi-Fi-H, 5G-H, and 5G-L, indicating that Wi-Fi-L has the best video quality in terms of resolution fluctuation. Graphics quality measures the overall quality of the video graphics: the higher the score, the better the graphics. 5G-H has the highest graphics quality score, followed by Wi-Fi-H, 5G-L, and Wi-Fi-L, indicating that 5G-H has the best video quality in terms of graphics. Buffering measures how long the video takes to start playing after being selected: the lower the buffering score, the faster playback starts. 5G-H has the lowest buffering score, followed by Wi-Fi-H, 5G-L, and Wi-Fi-L, indicating that 5G-H has the fastest video streaming speed.
Table 1 shows the MOS data for the high- and low-quality games, based on users’ opinions after playing. The results show that the perceived performance of the different network conditions differs significantly. There may be a relationship between network type and visual sharpness, as respondents perceived the Wi-Fi-L and 5G-L conditions to be fuzzier than Wi-Fi-H and 5G-H. Similarly, resolution changes were more pronounced over the 5G-L and Wi-Fi-L connections, suggesting a trade-off between network speed and image consistency. Wi-Fi-L scored slightly lower than the other conditions, although all received decent ratings for image quality. Buffering, however, becomes a serious problem for the Wi-Fi-L and 5G-L conditions compared with their high-quality counterparts. These results highlight the complexity of evaluating network performance and underscore the importance of considering specific factors when evaluating user satisfaction and experience. Graphical representations of the ANOVA results are given in Figure 5.
Table 2 shows the results of the ANOVA test for the four parameters (blur, resolution fluctuation/jaggedness, graphics quality, and buffering) across the four network conditions (Wi-Fi-H, Wi-Fi-L, 5G-H, and 5G-L). The sum of squares (SS) measures the variation in the data, and there are three SS types in an ANOVA analysis: between-groups SS, within-groups SS, and total SS. Degrees of freedom (df) measure the number of values in the final calculation of a statistic that are free to vary; there are two df values, between groups and within groups. The mean square is calculated by dividing the SS by the corresponding df, giving the average variation within or between groups. The F-statistic is the ratio of the between-groups mean square to the within-groups mean square and is used to test whether there are statistically significant differences. Significance (Sig.) is reported as a p-value associated with the F-statistic. The ANOVA test assesses whether there is a statistically significant difference between the means of two or more groups; here, it tests whether the means of the four network conditions differ across all four parameters. The F-statistic for each test is shown in the table: a large F-statistic suggests a difference between group means, with significance confirmed by the p-value. When comparing means across groups, a p-value of less than 0.05 indicates statistical significance. Each p-value in the table is less than 0.05; therefore, there is a statistically significant difference between the means of the four network conditions for each parameter.
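The SS/df/mean-square/F quantities described above can be sketched as a one-way ANOVA in a few lines. The MOS samples below are illustrative stand-ins for the four network conditions, not the study's data.

```python
def one_way_anova(groups):
    """Return (F statistic, df_between, df_within) for a one-way ANOVA."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups SS: spread of the group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups SS: spread of each sample around its own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

# Hypothetical 5-point MOS samples for the four conditions.
wifi_h = [5, 5, 4, 5, 4]
wifi_l = [3, 2, 3, 3, 2]
g5_h = [4, 5, 4, 4, 5]
g5_l = [2, 3, 2, 2, 3]
f, dfb, dfw = one_way_anova([wifi_h, wifi_l, g5_h, g5_l])
```

The resulting F would then be compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the p-value reported in Table 2; statistical packages such as SPSS or `scipy.stats.f_oneway` perform that final step.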
Table 3 shows significant differences in the mean scores for the four dependent variables (Image blur, low resolution, graphic quality, and buffering) between the groups (Wi-Fi High, Wi-Fi Low, 5G High, and 5G Low).
Users on Wi-Fi High gave high QoE ratings and reported less image blur than users on Wi-Fi Low. This is plausibly because Wi-Fi High has a higher bandwidth than Wi-Fi Low, so users on Wi-Fi High have more data available, which can lead to a better user experience. The mean difference in image blur between Wi-Fi High and Wi-Fi Low is −0.674; with a p-value of 0.082 (Table 3), this difference falls just short of significance after the Bonferroni correction. The mean difference in image blur between Wi-Fi High and 5G High is 0.522, with a p-value of 0.332, so the blur ratings of these two networks do not differ significantly. Figure 6 shows the graphical representation of image blur.
Users on Wi-Fi High gave high QoE ratings and reported significantly fewer low-resolution artifacts than users on Wi-Fi Low or 5G High. This is likely because Wi-Fi High has a higher bandwidth than Wi-Fi Low or 5G High, so users on Wi-Fi High have more data available, which can lead to a better user experience. The mean difference in the low-resolution rating between Wi-Fi High and Wi-Fi Low is −1.565, with a p-value of 0.000, meaning that users on Wi-Fi High experience significantly higher effective resolution than users on Wi-Fi Low. The mean difference between Wi-Fi High and 5G High is −0.978, with a p-value of 0.000, meaning that users on Wi-Fi High also experience significantly higher resolution than users on 5G High. Figure 7 shows the graphical representation of low resolution (image jaggedness).
Users on Wi-Fi High gave high QoE ratings and rated graphic quality significantly higher than users on Wi-Fi Low. This is likely because Wi-Fi High has a higher bandwidth than Wi-Fi Low, so users on Wi-Fi High have more data available, which can lead to a better user experience. The mean difference in graphic quality between Wi-Fi High and Wi-Fi Low is 1.087, with a p-value of 0.000, meaning that users on Wi-Fi High experience significantly higher graphic quality than users on Wi-Fi Low. The mean difference between Wi-Fi High and 5G High is −0.500, with a p-value of 0.025, meaning that users on 5G High in fact rated graphic quality significantly higher than users on Wi-Fi High, consistent with the MoS values in Table 1 (4.8 versus 4.3). Figure 8 shows the graphical representation of graphics quality.
Users on Wi-Fi High experience significantly less buffering than users on Wi-Fi Low or 5G High. This is likely because Wi-Fi High has a higher bandwidth than Wi-Fi Low or 5G High, so users on Wi-Fi High have more data available, which can lead to a better user experience. The mean difference in buffering between Wi-Fi High and Wi-Fi Low is −2.196, with a p-value of 0.000, and the mean difference between Wi-Fi High and 5G High is −1.087, with a p-value of 0.000; in both cases, users on Wi-Fi High experience significantly less buffering. Figure 9 shows the graphical representation of buffering.
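The pairwise p-values in Table 3 are Bonferroni-adjusted for multiple comparisons: with k comparisons, each raw p-value is multiplied by k (capped at 1.0), or equivalently tested against alpha/k. A minimal sketch of the adjustment, with illustrative p-values:

```python
# Bonferroni adjustment as used for the Table 3 post hoc comparisons:
# multiply each raw p-value by the number of comparisons k (cap at 1.0),
# then compare against alpha. The raw p-values below are illustrative.

def bonferroni(p_values, alpha=0.05, ndigits=6):
    k = len(p_values)
    adjusted = [min(round(p * k, ndigits), 1.0) for p in p_values]
    significant = [p_adj < alpha for p_adj in adjusted]
    return adjusted, significant

adj, sig = bonferroni([0.001, 0.014, 0.300])
print(adj)  # [0.003, 0.042, 0.9]
print(sig)  # [True, True, False]
```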
Table 4 shows the results of Levene's test of homogeneity of variance for the four parameters (blur, resolution fluctuation/jagged, graphics quality, and buffering) across the four network types (Wi-Fi-H, Wi-Fi-L, 5G-H, and 5G-L). Levene's test is used to determine whether the variances of two or more groups are equal; here, it checks whether the variances of the four network types are equal for each parameter. The p-value for each test is shown in the table, and a p-value of less than 0.05 indicates a significant difference between the group variances. In this case, all of the p-values are less than 0.05, indicating a significant difference between the variances of the four network types for all four parameters. Figure 10, Figure 11, Figure 12 and Figure 13 show the test of homogeneity of variance for these four parameters.
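The mean-based Levene statistic in Table 4 is simply a one-way ANOVA F computed on the absolute deviations of each observation from its group mean. A minimal sketch, with illustrative data:

```python
# Mean-based Levene statistic as in Table 4: transform observations to
# absolute deviations from their group mean, then compute the one-way
# ANOVA F on those deviations. Groups below are illustrative.

def levene_mean(groups):
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    # absolute deviations from each group's mean
    z = [[abs(x - sum(g) / len(g)) for x in g] for g in groups]
    z_means = [sum(g) / len(g) for g in z]
    z_grand = sum(x for g in z for x in g) / n_total
    between = sum(len(g) * (m - z_grand) ** 2 for g, m in zip(z, z_means))
    within = sum((x - m) ** 2 for g, m in zip(z, z_means) for x in g)
    return (n_total - k) / (k - 1) * between / within

w = levene_mean([[0, 2, 4], [1, 2, 3]])
print(round(w, 3))  # 0.8
```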
In Table 5, the results of Kendall's W test show a statistically significant association between the variables examined. With a sample size of 184, Kendall's W coefficient is 0.141, and the associated chi-square value is 77.931 with 3 degrees of freedom. The asymptotic significance is less than 0.001 (p < 0.001), indicating a highly significant relationship among the analyzed variables. This suggests a non-random association between the variables, providing evidence for rejecting the null hypothesis of no agreement. Therefore, the hypothesis of a relationship, or consistency, between the investigated variables is supported.
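Kendall's W in Table 5 is computed from the rank totals assigned by the raters: W = 12S / (m²(n³ − n)) for m raters ranking n items, where S is the sum of squared deviations of the rank totals from their mean, and the associated chi-square is m(n − 1)W. A minimal sketch, with made-up rankings:

```python
# Kendall's coefficient of concordance (W) as reported in Table 5,
# plus its chi-square statistic. The rankings below are illustrative:
# three raters in perfect agreement, so W = 1.

def kendalls_w(rankings):
    m = len(rankings)         # number of raters
    n = len(rankings[0])      # number of items ranked
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)
    w = 12 * s / (m ** 2 * (n ** 3 - n))
    chi_square = m * (n - 1) * w
    return w, chi_square

w, chi2 = kendalls_w([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
print(w, chi2)  # 1.0 6.0
```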
Moreover, the Kruskal–Wallis test was used to analyze the differences in graphics quality, image blur, low resolution, and buffering among the groups, with network type (Wi-Fi High, Wi-Fi Low, 5G High, and 5G Low) as the grouping variable. As Table 6 shows, there is a statistically significant difference in graphics quality between the groups (χ² = 64.017, df = 3, p < 0.001), with the 5G High group ranking highest on average, followed by the Wi-Fi High, 5G Low, and Wi-Fi Low groups.
Similarly, there were significant differences in image blur, low resolution, and buffering among the groups (χ² = 22.980, df = 3, p < 0.001; χ² = 59.650, df = 3, p < 0.001; and χ² = 66.143, df = 3, p < 0.001, respectively), as shown in Table 7. In each case, the specific pattern of average ranks varies across network types. Taken together, these results indicate that the network type (Wi-Fi High, Wi-Fi Low, 5G High, or 5G Low) has a significant impact on users' perceived quality of experience in terms of graphics quality, image blur, low resolution, and buffering when engaging with the test content.
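The chi-square values in Tables 6 and 7 come from the Kruskal–Wallis H statistic: all observations are pooled and ranked (tied values receive the average of their positions), and then H = 12/(N(N+1)) · Σ R_i²/n_i − 3(N+1), where R_i is the rank sum of group i. A minimal sketch, with illustrative data:

```python
# Kruskal-Wallis H as used for Tables 6 and 7: rank the pooled sample
# (average ranks for ties), then combine the per-group rank sums.
# Groups below are illustrative, not the study's ratings.

def average_ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1          # average of tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def kruskal_wallis(groups):
    pooled = [x for g in groups for x in g]
    r = average_ranks(pooled)
    n_total = len(pooled)
    h, pos = 0.0, 0
    for g in groups:
        rank_sum = sum(r[pos:pos + len(g)])
        h += rank_sum ** 2 / len(g)
        pos += len(g)
    return 12 / (n_total * (n_total + 1)) * h - 3 * (n_total + 1)

h = kruskal_wallis([[1, 2], [3, 4], [5, 6]])
print(round(h, 4))  # 4.5714
```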
For the interpretation of Table 8, there are four groups (1 = Wi-Fi H, 2 = Wi-Fi L, 3 = 5G H, and 4 = 5G L), and the analysis generated six pairwise comparisons from these four groups for each parameter (image blur, low resolution, graphics quality, and buffering): Groups 1 and 2 (Wi-Fi H vs. Wi-Fi L), Groups 1 and 3 (Wi-Fi H vs. 5G H), Groups 1 and 4 (Wi-Fi H vs. 5G L), Groups 2 and 3 (Wi-Fi L vs. 5G H), Groups 2 and 4 (Wi-Fi L vs. 5G L), and Groups 3 and 4 (5G H vs. 5G L).
For image blur, the t-values for the comparisons are −2.380 (Wi-Fi H vs. Wi-Fi L), 1.635 (Wi-Fi H vs. 5G H), −1.373 (Wi-Fi H vs. 5G L), 4.682 (Wi-Fi L vs. 5G H), 1.338 (Wi-Fi-L vs. 5G-L), and −3.549 (5G-H vs. 5G-L). The corresponding significance levels (sig) are 0.019, 0.105, 0.173, 0.000, 0.330, and 0.010, respectively. The confidence intervals (CI) for these comparisons are as follows: for Wi-Fi H vs. Wi-Fi L, CI (Lower) = −1.237 and CI (Upper) = −0.111; for Wi-Fi H vs. 5G H, CI (Lower) = −0.112 and CI (Upper) = 1.156; for Wi-Fi H vs. 5G L, CI (Lower) = −0.957 and CI (Upper) = 0.175; for Wi-Fi L vs. 5G H, CI (Lower) = 0.688 and CI (Upper) = 1.703; for Wi-Fi-L vs. 5G-L, CI (Lower) = −0.137 and CI (Upper) = 0.702; and for 5G-H vs. 5G-L, CI (Lower) = −1.424 and CI (Upper) = −0.400.
For low resolution, the t values for the comparisons are −5.529 (Wi-Fi H vs. Wi-Fi L), −3.405 (Wi-Fi H vs. 5G H), −8.557 (Wi-Fi H vs. 5G L), 2.816 (Wi-Fi L vs. 5G H), −3.856 (Wi-Fi-L vs. 5G-L), and −7.130 (5G-H vs. 5G-L). The corresponding significance levels (sig) are 0.000, 0.001, 0.000, 0.006, 0.030, and 0.000, respectively. The confidence intervals (CI) for these comparisons are as follows: for Wi-Fi H vs. Wi-Fi L, CI (Lower) = −2.128 and CI (Upper) = −1.003; for Wi-Fi H vs. 5G H, CI (Lower) = −1.549 and CI (Upper) = −0.407; for Wi-Fi H vs. 5G L, CI (Lower) = −2.705 and CI (Upper) = −1.686; for Wi-Fi L vs. 5G H, CI (Lower) = 0.173 and CI (Upper) = 1.001; for Wi-Fi-L vs. 5G-L, CI (Lower) = −0.955 and CI (Upper) = −0.310; and for 5G-H vs. 5G-L, CI (Lower) = −1.557 and CI (Upper) = −0.880.
For graphic quality, the t values for the comparisons are 5.059 (Wi-Fi H vs. Wi-Fi L), −2.962 (Wi-Fi H vs. 5G H), 0.468 (Wi-Fi H vs. 5G L), −10.104 (Wi-Fi L vs. 5G H), −5.703 (Wi-Fi-L vs. 5G-L), and 5.133 (5G-H vs. 5G-L). The corresponding significance levels (sig) are 0.000, 0.004, 0.641, 0.000, 0.090, and 0.010, respectively. The confidence intervals (CI) for these comparisons are as follows: for Wi-Fi H vs. Wi-Fi L, CI (Lower) = 0.660 and CI (Upper) = 1.514; for Wi-Fi H vs. 5G H, CI (Lower) = −0.835 and CI (Upper) = −0.165; for Wi-Fi H vs. 5G L, CI (Lower) = −0.282 and CI (Upper) = 0.456; for Wi-Fi L vs. 5G H, CI (Lower) = −1.899 and CI (Upper) = −1.275; for Wi-Fi-L vs. 5G-L, CI (Lower) = −1.348 and CI (Upper) = −0.650; and for 5G-H vs. 5G-L, CI (Lower) = 0.360 and CI (Upper) = 0.814.
For Buffering, the t values for the comparisons are −10.232 (Wi-Fi H vs. Wi-Fi L), −4.388 (Wi-Fi H vs. 5G H), −6.417 (Wi-Fi H vs. 5G L), 6.745 (Wi-Fi L vs. 5G H), 3.477 (Wi-Fi-L vs. 5G-L), and −2.493 (5G-H vs. 5G-L). The corresponding significance levels (sig) are 0.000, 0.000, 0.000, 0.000, 0.000, and 0.330, respectively. The confidence intervals (CI) for these comparisons are as follows: for Wi-Fi H vs. Wi-Fi L, CI (Lower) = −2.622 and CI (Upper) = −1.769; for Wi-Fi H vs. 5G H, CI (Lower) = −1.579 and CI (Upper) = −0.595; for Wi-Fi H vs. 5G L, CI (Lower) = −2.107 and CI (Upper) = −1.111; for Wi-Fi L vs. 5G H, CI (Lower) = 0.782 and CI (Upper) = 1.435; for Wi-Fi-L vs. 5G-L, CI (Lower) = 0.252 and CI (Upper) = 0.250; and for 5G-H vs. 5G-L, CI (Lower) = −0.938 and CI (Upper) = −0.110.
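The t values in Table 8 are standard two-sample statistics. A minimal sketch of the pooled-variance version, with illustrative data; reproducing the table's confidence intervals would additionally require the study's per-user ratings and a t critical value:

```python
# Pooled two-sample t statistic of the kind reported in Table 8.
# The two samples below are illustrative, not the study's data.
import math

def pooled_t(a, b):
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    pooled_var = (ssa + ssb) / (na + nb - 2)   # pooled sample variance
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (ma - mb) / se

t = pooled_t([1, 2, 3], [2, 3, 4])
print(round(t, 4))  # -1.2247
```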

5. Proposed Future Work for Graphics in Cloud Games

Figure 14 shows the proposed framework, in which our mechanism enhances the quality of cloud games. The framework consists of a few steps, which are explained below.
Automatic Adaptive Settings: Automatically adjust the graphics settings of a game based on the capabilities of the user’s device and the network conditions. This feature ensures the game runs smoothly and without interruptions, regardless of the user’s device or location. The adaptive settings feature analyzes the user’s device and network conditions, including the speed and stability of the internet connection, the device’s processing power, and available memory. Based on this analysis, the game’s graphics settings are automatically adjusted to ensure optimal performance.
Use predictive analytics: Use predictive analytics to analyze user behavior and predict which games they will likely play next. By anticipating user preferences, the cloud gaming service can optimize its hardware and infrastructure to provide a better gaming experience.
Implement real-time AI upscaling: Implement real-time AI upscaling technology to improve the resolution of games running on the cloud. This technology uses machine learning algorithms to enhance the quality of images in real-time, resulting in higher resolution and better visual quality.
Use dynamic latency prediction: Many applications run in the cloud. Whenever a user requests a game, the service can predict the latency of the user's connection and adjust the stream to the requested game accordingly, so the delivered quality fits the available network.
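As a rough illustration of how the adaptive-settings and latency-prediction steps above could combine, the sketch below maps a measured bandwidth estimate and a predicted round-trip time to a graphics preset. The thresholds and preset names are our illustrative assumptions, not values prescribed by the framework:

```python
# Hypothetical adaptive-settings rule: degrade first on predicted
# latency (buffering dominated QoE complaints in Table 1), then on
# throughput (blur / resolution fluctuation). All thresholds are
# illustrative assumptions.

def choose_preset(bandwidth_mbps, predicted_rtt_ms):
    if predicted_rtt_ms > 100 or bandwidth_mbps < 10:
        return "low"
    if predicted_rtt_ms > 40 or bandwidth_mbps < 25:
        return "medium"
    return "high"

print(choose_preset(50, 20))   # high
print(choose_preset(30, 60))   # medium
print(choose_preset(5, 20))    # low
```

In a real deployment, the inputs would come from continuous bandwidth probing and a latency-prediction model rather than fixed arguments.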

6. Conclusions

The importance of graphics quality in gaming is evident, as it has become a crucial aspect of game development. In recent years, game developers have invested heavily in graphics to create more immersive and visually stunning games, which has increased both the popularity of games with high-quality graphics and revenue for game developers. In this paper, we analyzed the experimental data and determined that graphics quality matters greatly to players: when playing a game, they want to feel immersed in it and to enjoy it. During the experiment, we also observed that network performance is important for any high-quality game. Following this study, developers and gaming companies should focus on graphics quality and provide standard network guidelines, so that a player's network bandwidth fits the game and users can easily understand and enjoy it.

Author Contributions

A.K.J.: conceptualization of this study, methodology, data analysis, and writing (original draft preparation); J.S.: supervision of the experiments and paper drafting; A.A.L.: collaborative supervision of the paper's experiments; V.V.E.: co-supervision of the experimental work; G.A.S.: SPSS data software; A.A.: proofreading; N.K.: final polishing of this draft; A.u.N.: collaborative work on collecting user QoE. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Project (2020YFC2005700) and the Guangdong Basic and Applied Basic Research Foundation under grants 2021A1515410005 and 2021A1515012627.

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. Ethics approval is not required for this type of study. The study was conducted following the local legislation: https://www.itu.int/pub/R-REC (accessed on 13 June 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated and/or analyzed during the current study are not publicly available because the results were collected through Google Forms after gameplay; they are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. Model view of Experiment VR games.
Figure 2. Real-time experiment of VR games.
Figure 3. Both gaming screens of the high- and low-quality games.
Figure 4. Graph of MoS data.
Figure 5. ANOVA test results of four parameters.
Figure 6. Bonferroni test data in image blur.
Figure 7. Bonferroni test data of low resolution.
Figure 8. Bonferroni test data of graphics quality.
Figure 9. Bonferroni test data of buffering.
Figure 10. Test of homogeneity of variance in image blur.
Figure 11. Test of homogeneity of variance in low resolution.
Figure 12. Test of homogeneity of variance of graphics quality.
Figure 13. Test of homogeneity of variance in buffering.
Figure 14. Proposed framework for graphics cloud games.
Table 1. MoS data of collected results.

Parameters                    | Wi-Fi-H | Wi-Fi-L | 5G-H | 5G-L
Image Blur                    | 2.7     | 3.4     | 2.2  | 3.1
Resolution fluctuation/Jagged | 2.5     | 4.1     | 3.5  | 4.7
Graphics Quality              | 4.3     | 3.2     | 4.8  | 4.2
Buffering                     | 2.1     | 4.3     | 3.2  | 3.7
Table 2. Analysis of variance (ANOVA) results.

                 | Sum of Squares | df  | Mean Square | F      | Sig.
Image Blur
  Between Groups | 37.060         | 3   | 12.353      | 7.337  | 0.000
  Within Groups  | 303.065        | 180 | 1.684       |        |
  Total          | 340.125        | 183 |             |        |
Low Resolution
  Between Groups | 120.196        | 3   | 40.065      | 31.880 | 0.000
  Within Groups  | 226.217        | 180 | 1.257       |        |
  Total          | 346.413        | 183 |             |        |
Graphic Quality
  Between Groups | 60.973         | 3   | 20.324      | 29.832 | 0.000
  Within Groups  | 122.630        | 180 | 0.681       |        |
  Total          | 183.603        | 183 |             |        |
Buffering
  Between Groups | 120.016        | 3   | 40.005      | 38.710 | 0.000
  Within Groups  | 186.022        | 180 | 1.033       |        |
  Total          | 306.038        | 183 |             |        |
Table 3. Bonferroni test data (multiple comparisons).

(I) Group | (J) Group | Mean Diff (I–J) | Std. Error | Sig.  | 95% CI L.B. | 95% CI U.B.
Image Blur
  Wi-Fi H | Wi-Fi L   | −0.674          | 0.271      | 0.082 | −1.40       | 0.05
  Wi-Fi H | 5G H      | 0.522           | 0.271      | 0.332 | −0.20       | 1.24
  Wi-Fi H | 5G L      | −0.391          | 0.271      | 0.899 | −1.11       | 0.33
  Wi-Fi L | Wi-Fi H   | 0.674           | 0.271      | 0.082 | −0.05       | 1.40
  Wi-Fi L | 5G H      | 1.196 *         | 0.271      | 0.000 | 0.47        | 1.92
  Wi-Fi L | 5G L      | 0.283           | 0.271      | 1.000 | −0.44       | 1.00
  5G H    | Wi-Fi H   | −0.522          | 0.271      | 0.332 | −1.24       | 0.20
  5G H    | Wi-Fi L   | −1.196 *        | 0.271      | 0.000 | −1.92       | −0.47
  5G H    | 5G L      | −0.913 *        | 0.271      | 0.005 | −1.63       | −0.19
  5G L    | Wi-Fi H   | 0.391           | 0.271      | 0.899 | −0.33       | 1.11
  5G L    | Wi-Fi L   | −0.283          | 0.271      | 1.000 | −1.00       | 0.44
  5G L    | 5G H      | 0.913 *         | 0.271      | 0.005 | 0.19        | 1.63
Low Resolution
  Wi-Fi H | Wi-Fi L   | −1.565 *        | 0.234      | 0.000 | −2.19       | −0.94
  Wi-Fi H | 5G H      | −0.978 *        | 0.234      | 0.000 | −1.60       | −0.35
  Wi-Fi H | 5G L      | −2.196 *        | 0.234      | 0.000 | −2.82       | −1.57
  Wi-Fi L | Wi-Fi H   | 1.565 *         | 0.234      | 0.000 | 0.94        | 2.19
  Wi-Fi L | 5G H      | 0.587           | 0.234      | 0.078 | −0.04       | 1.21
  Wi-Fi L | 5G L      | −0.630 *        | 0.234      | 0.046 | −1.25       | −0.01
  5G H    | Wi-Fi H   | 0.978 *         | 0.234      | 0.000 | 0.35        | 1.60
  5G H    | Wi-Fi L   | −0.587          | 0.234      | 0.078 | −1.21       | 0.04
  5G H    | 5G L      | −1.217 *        | 0.234      | 0.000 | −1.84       | −0.59
  5G L    | Wi-Fi H   | 2.196 *         | 0.234      | 0.000 | 1.57        | 2.82
  5G L    | Wi-Fi L   | 0.630 *         | 0.234      | 0.046 | 0.01        | 1.25
  5G L    | 5G H      | 1.217 *         | 0.234      | 0.000 | 0.59        | 1.84
Graphic Quality
  Wi-Fi H | Wi-Fi L   | 1.087 *         | 0.172      | 0.000 | 0.63        | 1.55
  Wi-Fi H | 5G H      | −0.500 *        | 0.172      | 0.025 | −0.96       | −0.04
  Wi-Fi H | 5G L      | 0.087           | 0.172      | 1.000 | −0.37       | 0.55
  Wi-Fi L | Wi-Fi H   | −1.087 *        | 0.172      | 0.000 | −1.55       | −0.63
  Wi-Fi L | 5G H      | −1.587 *        | 0.172      | 0.000 | −2.05       | −1.13
  Wi-Fi L | 5G L      | −1.000 *        | 0.172      | 0.000 | −1.46       | −0.54
  5G H    | Wi-Fi H   | 0.500 *         | 0.172      | 0.025 | 0.04        | 0.96
  5G H    | Wi-Fi L   | 1.587 *         | 0.172      | 0.000 | 1.13        | 2.05
  5G H    | 5G L      | 0.587 *         | 0.172      | 0.005 | 0.13        | 1.05
  5G L    | Wi-Fi H   | −0.087          | 0.172      | 1.000 | −0.55       | 0.37
  5G L    | Wi-Fi L   | 1.000 *         | 0.172      | 0.000 | 0.54        | 1.46
  5G L    | 5G H      | −0.587 *        | 0.172      | 0.005 | −1.05       | −0.13
Buffering
  Wi-Fi H | Wi-Fi L   | −2.196 *        | 0.212      | 0.000 | −2.76       | −1.63
  Wi-Fi H | 5G H      | −1.087 *        | 0.212      | 0.000 | −1.65       | −0.52
  Wi-Fi H | 5G L      | −1.609 *        | 0.212      | 0.000 | −2.17       | −1.04
  Wi-Fi L | Wi-Fi H   | 2.196 *         | 0.212      | 0.000 | 1.63        | 2.76
  Wi-Fi L | 5G H      | 1.109 *         | 0.212      | 0.000 | 0.54        | 1.67
  Wi-Fi L | 5G L      | 0.587 *         | 0.212      | 0.037 | 0.02        | 1.15
  5G H    | Wi-Fi H   | 1.087 *         | 0.212      | 0.000 | 0.52        | 1.65
  5G H    | Wi-Fi L   | −1.109 *        | 0.212      | 0.000 | −1.67       | −0.54
  5G H    | 5G L      | −0.522          | 0.212      | 0.089 | −1.09       | 0.04
  5G L    | Wi-Fi H   | 1.609 *         | 0.212      | 0.000 | 1.04        | 2.17
  5G L    | Wi-Fi L   | −0.587 *        | 0.212      | 0.037 | −1.15       | −0.02
  5G L    | 5G H      | 0.522           | 0.212      | 0.089 | −0.04       | 1.09

* The mean difference is significant at the 0.05 level.
Table 4. Levene statistics: test of homogeneity of variance.

| Parameter | Basis | Levene Statistic | df1 | df2 | Sig. |
|---|---|---|---|---|---|
| Image Blur | Based on mean | 11.992 | 3 | 180 | 0.000 |
| | Based on median | 7.056 | 3 | 180 | 0.000 |
| | Based on median, with adjusted df | 7.056 | 3 | 169.338 | 0.000 |
| | Based on trimmed mean | 11.041 | 3 | 180 | 0.000 |
| Low Resolution | Based on mean | 32.765 | 3 | 180 | 0.000 |
| | Based on median | 19.625 | 3 | 180 | 0.000 |
| | Based on median, with adjusted df | 19.625 | 3 | 159.411 | 0.000 |
| | Based on trimmed mean | 32.869 | 3 | 180 | 0.000 |
| Graphic Quality | Based on mean | 7.875 | 3 | 180 | 0.000 |
| | Based on median | 3.978 | 3 | 180 | 0.009 |
| | Based on median, with adjusted df | 3.978 | 3 | 123.162 | 0.010 |
| | Based on trimmed mean | 7.625 | 3 | 180 | 0.000 |
| Buffering | Based on mean | 7.455 | 3 | 180 | 0.000 |
| | Based on median | 5.935 | 3 | 180 | 0.001 |
| | Based on median, with adjusted df | 5.935 | 3 | 155.029 | 0.001 |
| | Based on trimmed mean | 5.853 | 3 | 180 | 0.001 |
Table 5. Kendall's W test.

| Parameter | Value |
|---|---|
| N | 184 |
| Kendall's W a | 0.141 |
| Chi-Square | 77.931 |
| df | 3 |
| Asymp. Sig. | 0.000 |

a Kendall's Coefficient of Concordance.
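Kendall's W can be recovered from the chi-square statistic as W = χ² / (N(k − 1)), where N is the number of raters and k the number of ranked conditions. With Table 5's values (χ² = 77.931, N = 184, k = 4) this gives 77.931 / 552 ≈ 0.141, consistent with the reported coefficient. A short check (the helper function name is our own):

```python
def kendalls_w(chi2: float, n: int, k: int) -> float:
    """Kendall's coefficient of concordance from the (Friedman-type)
    chi-square statistic, n raters, and k ranked conditions."""
    return chi2 / (n * (k - 1))

# Reproduce Table 5: chi-square 77.931, N = 184, k = 4 conditions
print(round(kendalls_w(77.931, 184, 4), 3))  # 0.141
```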
Table 6. Kruskal–Wallis test.

| Parameter | Group | N | Mean Rank |
|---|---|---|---|
| Graphic Quality | Wi-Fi H | 46 | 104.21 |
| | Wi-Fi L | 46 | 47.41 |
| | 5G H | 46 | 127.90 |
| | 5G L | 46 | 90.48 |
| | Total | 184 | |
| Image Blur | Wi-Fi H | 46 | 85.93 |
| | Wi-Fi L | 46 | 114.13 |
| | 5G H | 46 | 65.96 |
| | 5G L | 46 | 103.98 |
| | Total | 184 | |
| Low Resolution | Wi-Fi H | 46 | 56.59 |
| | Wi-Fi L | 46 | 102.80 |
| | 5G H | 46 | 76.65 |
| | 5G L | 46 | 133.96 |
| | Total | 184 | |
| Buffering | Wi-Fi H | 46 | 48.43 |
| | Wi-Fi L | 46 | 132.78 |
| | 5G H | 46 | 82.34 |
| | 5G L | 46 | 106.45 |
| | Total | 184 | |
Table 7. Test statistics.

| | Graphic Quality | Image Blur | Low Resolution | Buffering |
|---|---|---|---|---|
| Chi-Square | 64.017 | 22.980 | 59.650 | 66.143 |
| df | 3 | 3 | 3 | 3 |
| Asymp. Sig. | 0.000 | 0.000 | 0.000 | 0.000 |
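Tables 6 and 7 report a Kruskal–Wallis H test per QoE parameter across the four network conditions. As a minimal sketch with synthetic ratings (illustrative assumptions, not the study's responses), SciPy's `kruskal` yields the same chi-square-distributed H statistic and asymptotic significance:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(2)
# Four synthetic groups of 46 ratings each (1-5 scale), one per condition
wifi_h, wifi_l, g5_h, g5_l = (rng.integers(1, 6, 46) for _ in range(4))

h_stat, p = kruskal(wifi_h, wifi_l, g5_h, g5_l)
# df = number of groups - 1 = 3, as in Table 7
print(f"H = {h_stat:.3f}, p = {p:.4f}")
```

The mean ranks in Table 6 are the per-group averages of the pooled ranks that `kruskal` computes internally; large spreads between them (e.g., Wi-Fi L at 47.41 vs. 5G H at 127.90 for graphic quality) drive the large H values in Table 7.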
Table 8. Student t-test analysis.

| Parameter | Statistic | Wi-Fi H vs. Wi-Fi L | Wi-Fi H vs. 5G H | Wi-Fi H vs. 5G L | Wi-Fi L vs. 5G H | Wi-Fi L vs. 5G L | 5G H vs. 5G L |
|---|---|---|---|---|---|---|---|
| Image Blur | t | −2.380 | 1.635 | −1.373 | 4.682 | 1.338 | −3.549 |
| | sig | 0.019 | 0.105 | 0.173 | 0.000 | 0.330 | 0.010 |
| | CI (Lower) | −1.237 | −0.112 | −0.957 | 0.688 | −0.137 | −1.424 |
| | CI (Upper) | −0.111 | 1.156 | 0.175 | 1.703 | 0.702 | −0.400 |
| Low Resolution | t | −5.529 | −3.405 | −8.557 | 2.816 | −3.856 | −7.130 |
| | sig | 0.000 | 0.001 | 0.000 | 0.006 | 0.030 | 0.000 |
| | CI (Lower) | −2.128 | −1.549 | −2.705 | 0.173 | −0.955 | −1.557 |
| | CI (Upper) | −1.003 | −0.407 | −1.686 | 1.001 | −0.310 | −0.880 |
| Graphic Quality | t | 5.059 | −2.962 | 0.468 | −10.104 | −5.703 | 5.133 |
| | sig | 0.000 | 0.004 | 0.641 | 0.000 | 0.090 | 0.010 |
| | CI (Lower) | 0.660 | −0.835 | −0.282 | −1.899 | −1.348 | 0.360 |
| | CI (Upper) | 1.514 | −0.165 | 0.456 | −1.275 | −0.650 | 0.814 |
| Buffering | t | −10.232 | −4.388 | −6.417 | 6.745 | 3.477 | −2.493 |
| | sig | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.330 |
| | CI (Lower) | −2.622 | −1.579 | −2.107 | 0.782 | 0.252 | −0.938 |
| | CI (Upper) | −1.769 | −0.595 | −1.111 | 1.435 | 0.250 | −0.110 |
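Table 8's six columns are the pairwise independent-samples t-tests between the four conditions. A minimal sketch of that pairwise loop, again on synthetic ratings rather than the study's data (the condition means and spreads below are invented for illustration):

```python
from itertools import combinations

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
# Illustrative ratings for the four conditions, 46 users each
conditions = {
    "Wi-Fi H": rng.normal(4.2, 0.6, 46),
    "Wi-Fi L": rng.normal(2.1, 1.0, 46),
    "5G H": rng.normal(4.5, 0.5, 46),
    "5G L": rng.normal(3.2, 0.9, 46),
}

# One independent-samples t-test per condition pair, as in Table 8
for (a, xa), (b, xb) in combinations(conditions.items(), 2):
    t, p = ttest_ind(xa, xb)
    print(f"{a} vs. {b}: t = {t:.3f}, sig = {p:.3f}")
```

The sign convention matches the table: a negative t means the first-named condition scored lower on that parameter than the second.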
Jumani, A.K.; Shi, J.; Laghari, A.A.; Estrela, V.V.; Sampedro, G.A.; Almadhor, A.; Kryvinska, N.; ul Nabi, A. Quality of Experience That Matters in Gaming Graphics: How to Blend Image Processing and Virtual Reality. Electronics 2024, 13, 2998. https://doi.org/10.3390/electronics13152998