Search Results (34)

Search Parameters:
Keywords = spatial reality display method

7 pages, 5282 KiB  
Proceeding Paper
Tuning the Electrical Resistivity of Molecular Liquid Crystals for Electro-Optical Devices
by Michael Gammon, Iyanna Trevino, Michael Burnes, Noah Lee, Abdul Saeed and Yuriy Garbovskiy
Eng. Proc. 2025, 87(1), 34; https://doi.org/10.3390/engproc2025087034 - 2 Apr 2025
Viewed by 221
Abstract
Modern applications of molecular liquid crystals span from high-resolution displays for augmented and virtual reality to miniature tunable lasers, reconfigurable microwave devices for space exploration and communication, and tunable electro-optical elements, including spatial light modulators, waveguides, lenses, light shutters, filters, and waveplates, to name a few. The tunability of these devices is achieved through electric-field-induced reorientation of liquid crystals. Because this reorientation can be altered by ions that are normally present in mesogenic materials in minute quantities and give them a finite electrical resistivity, developing new ways to control the ion concentration in liquid crystals is very important. A promising way to enhance the electrical resistivity of molecular liquid crystals is the addition of nano-dopants to low-resistivity liquid crystals. When nanoparticles capture ions, they immobilize them and thereby increase the resistivity of the liquid crystal host. If properly implemented, this method can convert low-resistivity liquid crystals into high-resistivity liquid crystals. However, uncontrolled ionic contamination of the nanoparticles can significantly alter this process. In this paper, building on our previous work, we explore how physical parameters such as the size of the nanoparticles, their concentration, and their level of ionic contamination affect both the enhancement and the lowering of the resistivity of molecular liquid crystals. Additionally, we analyze the use of two types of nano-dopants to achieve better control over the electrical resistivity of molecular liquid crystals. Full article
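
A quick way to see why trapping ions should raise the resistivity is the standard ionic-conduction relation (a textbook estimate, not a formula taken from this paper):

```latex
% Standard ionic-conduction estimate (not from the paper): each ion species i
% with number density n_i, charge q_i, and mobility mu_i contributes to the
% DC conductivity, and the resistivity is its inverse.
\[
  \sigma = \sum_i n_i \, q_i \, \mu_i ,
  \qquad
  \rho = \frac{1}{\sigma} = \frac{1}{\sum_i n_i \, q_i \, \mu_i} .
\]
% If nanoparticles capture a fraction f of the mobile ions, n_i -> (1 - f) n_i
% and rho grows by roughly 1/(1 - f); ion-contaminated nanoparticles instead
% release ions, increasing n_i and lowering rho.
```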

29 pages, 4981 KiB  
Article
SRD Method: Integrating Autostereoscopy and Gesture Interaction for Immersive Serious Game-Based Behavioral Skills Training
by Linkai Lyu, Tianrui Hu, Hongrun Wang and Wenjun Hou
Electronics 2025, 14(7), 1337; https://doi.org/10.3390/electronics14071337 - 27 Mar 2025
Viewed by 245
Abstract
This study focuses on the innovative application of HCI and XR technologies in behavioral skills training (BST) in the digital age, exploring their potential in education, especially experimental training. Despite the opportunities these technologies offer for immersive BST, traditional methods remain mainstream, with XR devices such as HMDs causing user discomfort and current research paying little attention to evaluating user experience. To address these issues, we propose the spatial reality display (SRD) method, a new BST approach built around a spatial reality display. This method uses autostereoscopic technology to avoid HMD discomfort, employs intuitive gesture interactions to reduce learning costs, and integrates BST content into serious games (SGs) to enhance user acceptance. Using the aluminothermic reaction in chemistry experiments as an example, we developed a Unity3D-based XR application that allows users to conduct experiments in a 3D virtual environment. Our study compared the SRD method with traditional BST through simulation, questionnaires, and interviews, revealing significant advantages of SRD in enhancing user skills and intrinsic motivation. Full article

24 pages, 6178 KiB  
Article
HoloGaussian Digital Twin: Reconstructing 3D Scenes with Gaussian Splatting for Tabletop Hologram Visualization of Real Environments
by Tam Le Phuc Do, Jinwon Choi, Viet Quoc Le, Philippe Gentet, Leehwan Hwang and Seunghyun Lee
Remote Sens. 2024, 16(23), 4591; https://doi.org/10.3390/rs16234591 - 6 Dec 2024
Viewed by 1870
Abstract
Several studies have explored the use of hologram technology in architecture and urban design, demonstrating its feasibility. Holograms can represent 3D spatial data and offer an immersive experience, potentially replacing traditional methods such as physical 3D models and offering a promising alternative to mixed-reality display technologies. Holograms can visualize realistic scenes such as buildings, cityscapes, and landscapes using novel view synthesis techniques. This study examines the suitability of spatial data collected through the Gaussian splatting method for tabletop hologram visualization. Recent advancements in Gaussian splatting algorithms allow for real-time spatial data collection of higher quality than photogrammetry and neural radiance fields. Hologram visualization and Gaussian splatting are similar in that both recreate 3D scenes without the need for mesh reconstruction. In this research, primary image data acquired by an unmanned aerial vehicle were processed for 3D reconstruction using Gaussian splatting techniques and subsequently visualized through holographic displays. Two experimental environments were used, namely, a building and a university campus. As a result, 3D Gaussian data have proven to be an ideal spatial data source for hologram visualization, offering new possibilities for real-time motion holograms of real environments and digital twins. Full article
(This article belongs to the Special Issue Application of Photogrammetry and Remote Sensing in Urban Areas)

9 pages, 2319 KiB  
Article
Augmented Reality Improved Knowledge and Efficiency of Root Canal Anatomy Learning: A Comparative Study
by Fahd Alsalleeh, Katsushi Okazaki, Sarah Alkahtany, Fatemah Alrwais, Mohammad Bendahmash and Ra’ed Al Sadhan
Appl. Sci. 2024, 14(15), 6813; https://doi.org/10.3390/app14156813 - 4 Aug 2024
Cited by 1 | Viewed by 2245
Abstract
Teaching root canal anatomy has traditionally relied on static methods, but recent studies have explored the potential of advanced technologies like augmented reality (AR) to enhance learning and address the limitations of traditional training methods, such as the requirement for spatial imagination and the inability to fully simulate clinical scenarios. This study evaluated the potential of AR as a tool for teaching root canal anatomy in preclinical endodontic training for predoctoral dental students. Six cone beam computed tomography (CBCT) images of teeth were selected. A board-certified endodontist and a radiologist recorded the tooth type and classification of the root canals. STereoLithography (STL) files of the same images were then imported into a virtual reality (VR) application and viewed through a VR head-mounted display. Forty-three third-year dental students were asked questions about root canal anatomy based on the CBCT images and then again after viewing the AR model. The time to respond to each question, along with feedback, was recorded. Student responses were paired, and the difference between CBCT and AR scores was examined using a paired-sample t-test with significance set at p = 0.05. Students demonstrated a significant improvement in their ability to answer questions about root canal anatomy after using the AR model (p < 0.05). Female participants demonstrated significantly higher AR scores than male participants; however, gender did not significantly influence overall test scores. Furthermore, students required significantly less time to answer questions after using the AR model (M = 4.09, SD = 3.55) than with the CBCT method (M = 15.21, SD = 8.01) (p < 0.05). This indicates that AR may improve learning efficiency alongside comprehension. In a positive feedback survey, 93% of students reported that the AR simulation led to a better understanding of root canal anatomy than traditional CBCT interpretation. While this study highlights the potential of AR in learning root canal anatomy, further research is needed to explore its long-term impact and efficacy in clinical settings. Full article
(This article belongs to the Special Issue Virtual/Augmented Reality and Its Applications)
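
The paired comparison described in this abstract boils down to a paired-sample t-test; a minimal sketch using scipy, with made-up per-student scores (the variable names and numbers are illustrative, not the study's data):

```python
# Minimal sketch of the paired analysis described above (illustrative data,
# not the study's): each student is scored once with CBCT and once with AR.
import numpy as np
from scipy import stats

cbct_scores = np.array([4, 5, 3, 6, 5, 4, 7, 5])  # hypothetical per-student scores
ar_scores   = np.array([6, 7, 5, 8, 6, 6, 8, 7])

t_stat, p_value = stats.ttest_rel(ar_scores, cbct_scores)  # paired-sample t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:  # significance threshold used in the paper
    print("AR scores differ significantly from CBCT scores")
```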

29 pages, 1631 KiB  
Systematic Review
Extended Reality-Based Head-Mounted Displays for Surgical Education: A Ten-Year Systematic Review
by Ziyu Qi, Felix Corr, Dustin Grimm, Christopher Nimsky and Miriam H. A. Bopp
Bioengineering 2024, 11(8), 741; https://doi.org/10.3390/bioengineering11080741 - 23 Jul 2024
Cited by 4 | Viewed by 2115
Abstract
Surgical education demands extensive knowledge and skill acquisition within limited time frames, often constrained by reduced training opportunities and high-pressure environments. This review evaluates the effectiveness of extended reality-based head-mounted display (ExR-HMD) technology in surgical education, examining its impact on educational outcomes and exploring its strengths and limitations. Data from PubMed, Cochrane Library, Web of Science, ScienceDirect, Scopus, ACM Digital Library, IEEE Xplore, WorldCat, and Google Scholar (2014–2024) were synthesized. After screening, 32 studies comparing ExR-HMD and traditional surgical training methods for medical students or residents were identified. Quality and bias were assessed using the Medical Education Research Study Quality Instrument, the Newcastle–Ottawa Scale-Education, and the Cochrane Risk of Bias Tools. Results indicate that ExR-HMD offers benefits such as increased immersion, spatial awareness, and interaction, and supports motor skill acquisition theory and constructivist educational theories. However, challenges such as system fidelity, operational inconvenience, and physical discomfort were noted. Nearly half the studies reported outcomes comparable or superior to traditional methods, emphasizing the importance of social interaction. Limitations include study heterogeneity and the restriction to English-language publications. ExR-HMD shows promise but needs better integration of educational theory and social interaction. Future research should address technical and economic barriers to global accessibility. Full article

16 pages, 6441 KiB  
Article
Three-Dimensional Documentation and Reconversion of Architectural Heritage by UAV and HBIM: A Study of Santo Stefano Church in Italy
by Guiye Lin, Guokai Li, Andrea Giordano, Kun Sang, Luigi Stendardo and Xiaochun Yang
Drones 2024, 8(6), 250; https://doi.org/10.3390/drones8060250 - 6 Jun 2024
Cited by 5 | Viewed by 1878
Abstract
Historic buildings hold significant cultural value, and their repair and protection require diverse approaches. With the advent of 3D digitalization, drones have gained significance in heritage studies. This research focuses on applying digital methods to the restoration of architectural heritage. It utilizes non-contact measurement technology, specifically unmanned aerial vehicles (UAVs), for data collection, creates 3D point cloud models using heritage building information modeling (HBIM), and employs virtual reality (VR) for architectural heritage restoration. Employing the “close + surround” oblique photography technique combined with image matching, computer vision, and other technologies, a detailed and comprehensive 3D model of the real scene can be constructed, providing crucial data support for subsequent protection research and transformation efforts. Using the case of the Santo Stefano Church in Volterra, Italy, an idealized reconstructed 3D model database was established after data collection to preserve essential resources such as the original spatial data and relationships of the architectural site. Through the analysis of relevant historical data and the implementation of VR, the idealized, original appearance of the case was authentically restored. As a result, the building’s style was realistically displayed in the virtual simulation space with an immersive experience. This approach not only safeguards cultural heritage but also enhances the city’s image and promotes tourism resources, catering to the diverse needs of tourists. Full article

18 pages, 7366 KiB  
Article
Realistic Texture Mapping of 3D Medical Models Using RGBD Camera for Mixed Reality Applications
by Cosimo Aliani, Alberto Morelli, Eva Rossi, Sara Lombardi, Vincenzo Yuto Civale, Vittoria Sardini, Flavio Verdino and Leonardo Bocchi
Appl. Sci. 2024, 14(10), 4133; https://doi.org/10.3390/app14104133 - 13 May 2024
Cited by 5 | Viewed by 1356
Abstract
Augmented and mixed reality are becoming increasingly important in the medical field. The creation and visualization of digital models that closely resemble reality could greatly improve the user experience during augmented or mixed reality activities such as surgical planning and the education, training, and testing of medical students. This study introduces a technique for enhancing a 3D digital model reconstructed from cone-beam computed tomography images with its real coloured texture using an Intel D435 RGBD camera. The method is based on iteratively projecting the two models onto a 2D plane, identifying their contours, and then minimizing the distance between them. Finally, the coloured digital models were displayed in mixed reality through a Microsoft HoloLens 2, and an application to interact with them using hand gestures was developed. The registration error between the two 3D models, evaluated using 30,000 random points, is 1.1 ± 1.3 mm on the x-axis, 0.7 ± 0.8 mm on the y-axis, and 0.9 ± 1.2 mm on the z-axis. This result was achieved in three iterations, reducing the average registration error over the three axes from 1.4 mm to 0.9 mm. The heatmap created to visualize the spatial distribution of the error shows that it is uniformly distributed over the surface of the point cloud obtained with the RGBD camera, except for some areas of the nose and ears where the registration error tends to increase. The obtained results suggest that the proposed methodology is effective. In addition, since the RGBD camera used is inexpensive, future approaches based on the simultaneous use of multiple cameras could further improve the results. Finally, the augmented reality visualization of the obtained result is innovative and could provide support in all those cases where the visualization of three-dimensional medical models is necessary. Full article
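
The per-axis error statistics quoted above can be reproduced in spirit by comparing corresponding points on the two aligned models; the sketch below assumes both point clouds are already in a common frame and uses nearest-neighbour correspondences, an illustrative simplification rather than the paper's exact procedure:

```python
# Per-axis registration error between two aligned 3D models (sketch).
# Assumes both point clouds share a reference frame; correspondences come
# from a nearest-neighbour search, which is an assumption for illustration.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
ct_points   = rng.normal(size=(30_000, 3))                                # stand-in for the CT-derived model (metres)
rgbd_points = ct_points + rng.normal(scale=0.001, size=(30_000, 3))       # stand-in for the RGBD point cloud

tree = cKDTree(ct_points)
_, idx = tree.query(rgbd_points)                 # closest CT point for each RGBD point
diff = np.abs(rgbd_points - ct_points[idx])      # per-axis absolute error

mean, std = diff.mean(axis=0), diff.std(axis=0)
for axis, m, s in zip("xyz", mean, std):
    print(f"{axis}-axis: {m * 1000:.1f} ± {s * 1000:.1f} mm")
```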

23 pages, 22143 KiB  
Article
Anthropological Comparative Analysis of CCTV Footage in a 3D Virtual Environment
by Krzysztof Maksymowicz, Aleksandra Kuzan, Łukasz Szleszkowski and Wojciech Tunikowski
Appl. Sci. 2023, 13(21), 11879; https://doi.org/10.3390/app132111879 - 30 Oct 2023
Cited by 2 | Viewed by 1793
Abstract
The image is a particularly valuable data carrier in forensic medicine and forensic analyses. One such analysis is assessing whether a graphically captured object is the same object examined in reality. This is a complicated process because perspective foreshortening makes it difficult to determine the scale and proportion of objects in the frame, as well as to subsequently read their actual measurements correctly. This paper presents a method for the 3D reconstruction of the silhouettes of people recorded in a photo or video, with the aim of identifying these people through subsequent comparative studies. The authors present an algorithm for handling graphic evidence, using the example of the spatial correlation of the silhouette of the perpetrator of an actual event (recorded via CCTV footage) with the silhouette of a suspect (scanned in 3D in custody). The authors argue that the isometric display mode offered by 3D platforms (devoid of perspective foreshortening), together with animation of the figure into the desired identical poses, makes it possible not only to obtain linear measurements of the person but also to produce an orthophotographic visualization of body proportions, allowing comparison with another silhouette, which is difficult to achieve in a perspective view of the studied image. Full article
(This article belongs to the Special Issue Intelligent Digital Forensics and Cyber Security)

50 pages, 2531 KiB  
Systematic Review
Cognitive Assessment Based on Electroencephalography Analysis in Virtual and Augmented Reality Environments, Using Head Mounted Displays: A Systematic Review
by Foteini Gramouseni, Katerina D. Tzimourta, Pantelis Angelidis, Nikolaos Giannakeas and Markos G. Tsipouras
Big Data Cogn. Comput. 2023, 7(4), 163; https://doi.org/10.3390/bdcc7040163 - 13 Oct 2023
Cited by 6 | Viewed by 4768
Abstract
The objective of this systematic review centers on cognitive assessment based on electroencephalography (EEG) analysis in Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) environments projected on Head Mounted Displays (HMDs), in healthy individuals. A range of electronic databases (Scopus, ScienceDirect, IEEE Xplore, and PubMed) were searched using the PRISMA method, and 82 experimental studies were included in the final report. Specific aspects of cognitive function were evaluated, including cognitive load, immersion, spatial awareness, interaction with the digital environment, and attention. These were analyzed with respect to the number of participants, stimuli, frequency band ranges, data preprocessing, and data analysis. Based on the analysis conducted, significant findings have emerged both in terms of the experimental structure related to cognitive neuroscience and the key parameters considered in the research. Numerous avenues and domains requiring more extensive exploration have also been identified within neuroscience and cognition research in digital environments. These encompass factors related to the experimental setup, including narrow participant populations and the feasibility of using EEG equipment with a limited number of sensors to overcome the time-consuming placement of a multi-electrode EEG cap. There is a clear need for more in-depth exploration of signal analysis, especially concerning the α, β, and γ sub-bands and their role in providing more precise insights for evaluating cognitive states. Finally, further research into augmented and mixed reality environments will enable the extraction of more accurate conclusions regarding their utility in cognitive neuroscience. Full article

22 pages, 52251 KiB  
Article
SkyroadAR: An Augmented Reality System for UAVs Low-Altitude Public Air Route Visualization
by Junming Tan, Huping Ye, Chenchen Xu, Hongbo He and Xiaohan Liao
Drones 2023, 7(9), 587; https://doi.org/10.3390/drones7090587 - 19 Sep 2023
Cited by 4 | Viewed by 2159
Abstract
Augmented Reality (AR) technology visualizes virtual objects in the real environment, offering users an immersive experience that enhances their spatial perception of virtual objects. This makes AR an important visualization tool in engineering, education, and gaming. The Unmanned Aerial Vehicles’ (UAVs’) low-altitude public air route (Skyroad) is a forward-looking virtual transportation infrastructure over complex terrain, and its invisibility presents challenges for user perception. In order to achieve a 3D and intuitive visualization of Skyroad, this paper proposes an AR visualization framework based on a physical sandbox. The framework consists of four processes: reconstructing and 3D-printing a sandbox model, producing virtual scenes for the UAV Skyroad, implementing a markerless registration and tracking method, and displaying Skyroad scenes on the sandbox with GPU-based occlusion handling. With the support of the framework, a mobile application called SkyroadAR was developed. System performance tests and user questionnaires were conducted on SkyroadAR; the results showed that our approaches to tracking and occlusion provided an efficient and stable AR effect for Skyroad. This intuitive visualization was recognized by both professional and non-professional users. Full article

14 pages, 5113 KiB  
Article
Initial Structure Design and Optimization of an Automotive Remote Head-Up Display
by Yu Ye, Huaixin Chen and Zhixi Wang
Appl. Sci. 2023, 13(17), 9649; https://doi.org/10.3390/app13179649 - 25 Aug 2023
Cited by 1 | Viewed by 1756
Abstract
To address the difficulties of constructing the initial structure of an automotive augmented reality head-up display (AR-HUD) and of avoiding occlusion within it, a method for quickly building the initial structure of an automotive AR-HUD is proposed. Firstly, the positions of the mirrors in the initial structure are calculated based on the Rodrigues rotation formula. Secondly, the positions of the mirrors are restricted by constraints during optimization to prevent structural occlusion. Finally, a virtual image display system with a virtual image distance of 7.5 m and a field of view of 10° × 5° is designed. The image quality analysis of the optimized system shows that the light spots in every field of view lie within the Airy disk. At the spatial cutoff frequency of the virtual image plane of the optical system, the modulation transfer function (MTF) value across the full field of view is essentially greater than 0.5, and the distortion is less than 1%. Simulations using a test image produce satisfactory results, which demonstrates the validity and feasibility of the structural design. This work provides a useful reference for the structural design of remote head-up display systems. Full article
(This article belongs to the Special Issue Applied Optics and Vision Science)
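
The Rodrigues rotation formula mentioned above rotates a vector v by an angle θ about a unit axis k; the sketch below shows the generic formula only, not the paper's specific mirror geometry or constraints:

```python
# Rodrigues rotation formula: rotate vector v by angle theta about unit axis k.
# Generic formula only; the paper's mirror layout is not reproduced here.
import numpy as np

def rodrigues_rotate(v, k, theta):
    """v_rot = v*cos(theta) + (k x v)*sin(theta) + k*(k . v)*(1 - cos(theta))"""
    k = k / np.linalg.norm(k)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1.0 - np.cos(theta)))

# Example: fold a horizontal chief ray with a mirror tilted 15 degrees about
# the x-axis (illustrative numbers; a mirror tilt turns the ray by twice the angle).
ray = np.array([0.0, 0.0, 1.0])
axis = np.array([1.0, 0.0, 0.0])
print(rodrigues_rotate(ray, axis, np.deg2rad(2 * 15)))
```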

18 pages, 14858 KiB  
Article
Augmented Reality Surgical Navigation System Integrated with Deep Learning
by Shin-Yan Chiou, Li-Sheng Liu, Chia-Wei Lee, Dong-Hyun Kim, Mohammed A. Al-masni, Hao-Li Liu, Kuo-Chen Wei, Jiun-Lin Yan and Pin-Yuan Chen
Bioengineering 2023, 10(5), 617; https://doi.org/10.3390/bioengineering10050617 - 20 May 2023
Cited by 8 | Viewed by 5742
Abstract
Most current surgical navigation methods rely on optical navigators with images displayed on an external screen. However, minimizing distractions during surgery is critical, and the spatial information displayed in this arrangement is non-intuitive. Previous studies have proposed combining optical navigation systems with augmented reality (AR) to provide surgeons with intuitive imaging during surgery through the use of planar and three-dimensional imagery. However, these studies have mainly focused on visual aids and have paid relatively little attention to real surgical guidance aids. Moreover, the use of augmented reality reduces system stability and accuracy, and optical navigation systems are costly. Therefore, this paper proposes an augmented reality surgical navigation system based on image positioning that achieves the desired system advantages with low cost, high stability, and high accuracy. The system also provides intuitive guidance for the surgical target point, entry point, and trajectory. Once the surgeon uses the navigation stick to indicate the position of the surgical entry point, the connection between the surgical target and the surgical entry point is immediately displayed on the AR device (tablet or HoloLens glasses), and a dynamic auxiliary line is shown to assist with incision angle and depth. Clinical trials were conducted for EVD (extra-ventricular drainage) surgery, and surgeons confirmed the system’s overall benefit. A “virtual object automatic scanning” method is proposed to achieve a high accuracy of 1 ± 0.1 mm for the AR-based system. Furthermore, a deep learning-based U-Net segmentation network is incorporated to enable automatic identification of the hydrocephalus location by the system. The system achieves recognition accuracy, sensitivity, and specificity of 99.93%, 93.85%, and 95.73%, respectively, a significant improvement over previous studies. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnosis and Prognosis)
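
The accuracy, sensitivity, and specificity figures reported above follow from a binary confusion matrix over the segmentation output; a minimal sketch with synthetic labels (the data and noise level are illustrative, not the paper's):

```python
# How accuracy, sensitivity, and specificity are computed for a binary
# segmentation; the labels below are synthetic stand-ins, not study data.
import numpy as np

def segmentation_metrics(pred, truth):
    """pred, truth: boolean masks marking the positive (lesion) pixels."""
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    return accuracy, sensitivity, specificity

rng = np.random.default_rng(1)
truth = rng.random((128, 128)) < 0.1
pred = truth ^ (rng.random((128, 128)) < 0.02)   # ground truth with a little label noise
print([f"{m:.4f}" for m in segmentation_metrics(pred, truth)])
```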

25 pages, 2217 KiB  
Article
Pedestrian Augmented Reality Navigator
by Tanmaya Mahapatra, Nikolaos Tsiamitros, Anton Moritz Rohr, Kailashnath K and Georgios Pipelidis
Sensors 2023, 23(4), 1816; https://doi.org/10.3390/s23041816 - 6 Feb 2023
Cited by 2 | Viewed by 2447
Abstract
Navigation is often regarded as one of the most exciting use cases for Augmented Reality (AR). Current AR Head-Mounted Displays (HMDs) are rather bulky and cumbersome to use and therefore do not yet offer a satisfactory user experience for the mass market. However, the latest-generation smartphones offer AR capabilities out of the box, sometimes even with pre-installed apps. Apple’s ARKit framework is available on iOS devices and is free for developers to use; Android features a counterpart, ARCore. Both systems work well for small, spatially confined applications but lack global positional awareness. This is a direct result of a limitation in current mobile technology: Global Navigation Satellite Systems (GNSSs) are relatively inaccurate and often do not work indoors because the signal cannot penetrate solid objects such as walls. In this paper, we present the Pedestrian Augmented Reality Navigator (PAReNt) iOS app as a solution to this problem. The app implements a data fusion technique to increase accuracy in global positioning and showcases AR navigation as one use case for the improved data. ARKit provides data about the smartphone’s motion, which is fused with GNSS data and a Bluetooth indoor positioning system via a Kalman Filter (KF). Four different KFs with different underlying models were implemented and independently evaluated to find the best filter. The evaluation measures the app’s accuracy against a ground truth under controlled circumstances. Two main testing methods were introduced and applied to determine which KF works best. Depending on the evaluation method, this novel approach improved the accuracy by 57% (when GPS and AR were used) or 32% (when Bluetooth and AR were used) over the raw sensor data. Full article
(This article belongs to the Section Navigation and Positioning)
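
The paper's four Kalman filter variants are not spelled out in the abstract; the sketch below is a minimal scalar filter that captures the fusion idea, with AR-odometry displacements driving the prediction step and occasional noisy absolute fixes (GNSS- or Bluetooth-like) driving the update step. All models and noise values are illustrative assumptions:

```python
# Minimal 1D Kalman filter illustrating the fusion idea described above.
# Smooth but drifting relative motion (AR odometry) dead-reckons the state;
# sparse, noisy absolute fixes correct it. Noise values are illustrative.
import numpy as np

q = 0.05   # process noise variance: drift of the AR odometry per step
r = 4.0    # measurement noise variance: ~2 m std for a GNSS-like fix

x, p = 0.0, 1.0                      # position estimate and its variance
rng = np.random.default_rng(0)
true_pos = 0.0

for step in range(100):
    true_pos += 0.14                              # simulated walking at 1.4 m/s, 10 Hz
    ar_displacement = 0.14 + rng.normal(0, 0.01)  # AR odometry: smooth, slightly biased
    # predict: dead-reckon with the AR displacement
    x += ar_displacement
    p += q
    # update: an absolute fix arrives once per second
    if step % 10 == 0:
        z = true_pos + rng.normal(0, 2.0)
        k = p / (p + r)                           # Kalman gain
        x += k * (z - x)
        p *= (1 - k)

print(f"true {true_pos:.2f} m, fused estimate {x:.2f} m")
```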

13 pages, 7427 KiB  
Article
High Resolution Multiview Holographic Display Based on the Holographic Optical Element
by Xiujuan Qin, Xinzhu Sang, Hui Li, Rui Xiao, Chongli Zhong, Binbin Yan, Zhi Sun and Yu Dong
Micromachines 2023, 14(1), 147; https://doi.org/10.3390/mi14010147 - 6 Jan 2023
Cited by 4 | Viewed by 3240
Abstract
Because of the low space-bandwidth product of the spatial light modulator (SLM), it is difficult to realize a multiview holographic three-dimensional (3D) display. To overcome this problem, a method based on a holographic optical element (HOE), which acts as a light-controlling element, is proposed in this study. The SLM is used to upload a synthetic phase-only hologram generated by angular spectrum diffraction theory. A digital grating is introduced into the hologram generation process to splice the reconstructions and adjust their positions. The HOE, fabricated by computer-generated hologram printing, can redirect the reconstructed images of multiple views into multiple viewing zones. Thus, the modulation function of the HOE must be well designed to avoid crosstalk between perspectives. The experimental results show that the proposed system can achieve a multiview holographic augmented reality (AR) 3D display without crosstalk. The resolution of each perspective is 4K, which is higher than that of existing multiview 3D display systems. Full article
(This article belongs to the Special Issue Three-Dimensional Display Technologies)
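
Angular spectrum propagation, the core of the hologram generation referenced above, filters the field's Fourier spectrum with the free-space transfer function; the sketch below shows the generic textbook method only, not the paper's full multiview pipeline with digital gratings and HOE redirection (parameters are illustrative):

```python
# Angular spectrum propagation of a sampled complex field (generic method;
# the paper's multiview hologram synthesis is not reproduced here).
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field by distance z; all lengths in metres."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: propagate a point-like object field and keep only the phase,
# as a phase-only hologram would (illustrative SLM pitch and distance).
obj = np.zeros((1024, 1024), dtype=complex)
obj[512, 512] = 1.0
hologram_phase = np.angle(angular_spectrum(obj, wavelength=532e-9, pitch=8e-6, z=0.1))
```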

11 pages, 3345 KiB  
Article
Spatiotemporal Thermal Control Effects on Thermal Grill Illusion
by Satoshi Saga, Ryotaro Kimoto and Kaede Kaguchi
Sensors 2023, 23(1), 414; https://doi.org/10.3390/s23010414 - 30 Dec 2022
Cited by 5 | Viewed by 2938
Abstract
The thermal grill illusion induces a pain sensation under a spatial display of warmth and coolness at approximately 40 °C and 20 °C. To make virtual pain display more broadly usable during virtual reality experiences, we propose a spatiotemporal control method for a variable thermal grill illusion and evaluate its effect. First, we examined whether the period until pain occurred changed with the spatial temperature distribution of pre-warming and pre-cooling, and verified whether this period became shorter as the temperature difference between pre-warming and pre-cooling increased. Next, we examined the effect of the number of grid elements on the illusion and verified that the larger the thermal area, the larger the pain area and the greater the perceived magnitude of pain. Full article
(This article belongs to the Special Issue Advanced Tactile Sensors)
