Search Results (17)

Search Parameters:
Keywords = ARKit

20 pages, 2353 KB  
Article
ARLO: Augmented Reality Localization Optimization for Real-Time Pose Estimation and Human–Computer Interaction
by Meng Xu, Qiqi Shu, Zhao Huang, Guang Chen and Stefan Poslad
Electronics 2025, 14(7), 1478; https://doi.org/10.3390/electronics14071478 - 7 Apr 2025
Cited by 3 | Viewed by 944
Abstract
Accurate and real-time outdoor localization and pose estimation are critical for various applications, including navigation, robotics, and augmented reality. Apple’s ARKit, a leading AR platform, employs visual–inertial odometry (VIO) and simultaneous localization and mapping (SLAM) algorithms to enable localization and pose estimation. However, ARKit-based systems suffer from positional bias when the device’s camera is obscured, a frequent issue in dynamic or crowded environments. This paper presents a novel approach that mitigates this limitation by integrating position-bias correction, context-aware localization, and human–computer interaction techniques into a cohesive interactive module group. The proposed system includes a navigation module, a positioning module, and a front-end rendering module that collaboratively optimize ARKit’s localization accuracy. Comprehensive evaluation across a variety of outdoor environments demonstrates the approach’s effectiveness in improving localization precision. This work contributes to enhancing ARKit-based systems, particularly in scenarios with limited visual input, thereby improving user experience and expanding the potential for outdoor localization applications. Experimental evaluations show that the method improves localization accuracy by up to 92.9% and reduces average positional error by more than 85% compared with baseline ARKit in occluded or crowded outdoor environments.
(This article belongs to the Section Computer Science & Engineering)
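The abstract does not detail ARLO’s correction algorithm, but the general idea of pulling a drifting ARKit position back toward an absolute reference can be sketched with a simple complementary filter. Everything below (function name, blend factor, coordinates) is illustrative, not taken from the paper.

```python
# Hypothetical sketch: blending ARKit's smooth but drift-prone position
# with an occasional absolute GNSS fix (not ARLO's actual method).

def fuse_position(arkit_pos, gnss_pos, alpha=0.9):
    """Complementary filter: trust ARKit's smooth motion short-term,
    pull toward the absolute GNSS fix long-term."""
    return tuple(alpha * a + (1 - alpha) * g
                 for a, g in zip(arkit_pos, gnss_pos))

# Simulate ARKit drifting 2 m east of the true position.
arkit = (12.0, 5.0)   # drifted estimate (true position is (10, 5))
gnss = (10.3, 4.8)    # noisy but unbiased absolute fix

corrected = fuse_position(arkit, gnss, alpha=0.7)
print(corrected)      # pulled back toward the GNSS fix
```

Lowering `alpha` trusts the absolute fix more; occlusion-aware systems typically adapt such a weight to the current tracking confidence.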

18 pages, 18990 KB  
Article
Using Virtual and Augmented Reality with GIS Data
by Karel Pavelka and Martin Landa
ISPRS Int. J. Geo-Inf. 2024, 13(7), 241; https://doi.org/10.3390/ijgi13070241 - 5 Jul 2024
Cited by 9 | Viewed by 5769
Abstract
This study explores how combining virtual reality (VR) and augmented reality (AR) with geographic information systems (GIS) revolutionizes data visualization. It traces the historical development of these technologies and highlights key milestones that paved the way for this study’s objectives. While existing platforms such as Esri’s software and Google Earth VR show promise, they lack complete integration for immersive GIS visualization. This gap has led to the need for a dedicated workflow to integrate selected GIS data into a game engine for visualization purposes. This study primarily utilizes QGIS for data preparation and Unreal Engine for immersive visualization: QGIS handles data management, while Unreal Engine offers advanced rendering and interactivity for immersive experiences. To tackle the challenge of handling extensive GIS datasets, this study proposes a workflow involving tiling, digital elevation model generation, and transforming GeoTIFF data into 3D objects. Leveraging QGIS and Three.js streamlines the conversion process for integration into Unreal Engine. The resulting virtual reality application features distinct stations, enabling users to navigate, visualize, compare, and animate GIS data effectively. Each station caters to specific functionalities, ensuring a seamless and informative experience within the VR environment. This study also delves into augmented reality applications, adapting methodologies to address hardware limitations for smoother user experiences. By optimizing textures and implementing augmented reality functionalities in Swift using the RealityKit and ARKit frameworks, this study extends the immersive GIS experience to iOS devices. In conclusion, this research demonstrates the potential of integrating virtual reality, augmented reality, and GIS, pushing data visualization into new realms. The innovative workflows and applications developed serve as a testament to the evolving landscape of spatial data interpretation and engagement.
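The tiling step in the workflow above can be illustrated with a minimal sketch: splitting a large raster extent into fixed-size tiles before DEM generation and 3D conversion. The raster dimensions, tile size, and function name are invented for illustration, not taken from the paper.

```python
# Illustrative tiling arithmetic for a large GeoTIFF raster:
# how many fixed-size tiles are needed to cover the full extent.
import math

def tile_grid(width_px, height_px, tile_px=1024):
    """Return (cols, rows) of tiles needed to cover the raster."""
    return (math.ceil(width_px / tile_px), math.ceil(height_px / tile_px))

cols, rows = tile_grid(10000, 4500, tile_px=1024)
print(cols, rows)  # 10 x 5 tiles; edge tiles are partially filled
```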

21 pages, 4639 KB  
Article
WebAR as a Mediation Tool Focused on Reading and Understanding of Technical Drawings Regarding Tailor-Made Projects for the Scenographic Industry
by José Eduardo Araújo Lôbo, Walter Franklin Marques Correia, João Marcelo Teixeira, José Edeson de Melo Siqueira and Rafael Alves Roberto
Appl. Sci. 2023, 13(22), 12295; https://doi.org/10.3390/app132212295 - 14 Nov 2023
Viewed by 1564
Abstract
Among the leading immersive technologies, augmented reality is one of the most promising and empowering for supporting designers in production environments. This research investigates Web-based mobile augmented reality as a mediation tool for the cognitive activities of reading and understanding technical drawings in the production and assembly of tailor-made projects in the scenographic industry. In this context, the research presents a method of using WebAR to improve the reading of technical drawings, seeking efficiency in the visualization of models and the exchange of information between professionals involved in the design, production, and assembly of products within the scope of scenography. The mediation tool was developed using WebAR platforms compatible with the native libraries (ARCore and ARKit), first to ensure compatibility with commonly used devices that workers or businesses can access, and second to leverage hybrid tracking techniques that combine vision and sensors to improve the reliability of augmented reality viewing. The proposed solution adopts multiple tracking and navigation techniques to expand spatial-skills components and provide greater exploratory freedom to users. The research process followed the Design Science Research Methodology and the DSR-Model, since it aimed to develop a solution to a practical problem as well as to produce knowledge from that process. Field experiments were conducted in two real companies, with end users on their own mobile devices, to evaluate usability and behavioral intent through acceptance, intent, and use-of-technology questionnaires, and perceived mental workload through NASA-TLX. The experimental results show that adopting this tool reduces the cognitive load involved in reading technical drawings and understanding projects. In general, its usability and intent-to-use scores showed significant levels of satisfaction, and the tool was positively accepted by all participants in the study.
(This article belongs to the Special Issue Applications of Virtual, Augmented, and Mixed Reality - 2nd Volume)

18 pages, 6595 KB  
Article
Towards Preventing Gaps in Health Care Systems through Smartphone Use: Analysis of ARKit for Accurate Measurement of Facial Distances in Different Angles
by Leon Nissen, Julia Hübner, Jens Klinker, Maximilian Kapsecker, Alexander Leube, Max Schneckenburger and Stephan M. Jonas
Sensors 2023, 23(9), 4486; https://doi.org/10.3390/s23094486 - 5 May 2023
Cited by 6 | Viewed by 3774
Abstract
There is a growing consensus in the global health community that the use of communication technologies will be an essential factor in ensuring universal health coverage of the world’s population. New technologies can only be used profitably if their accuracy is sufficient. We therefore explore the feasibility of using Apple’s ARKit technology to accurately measure the distance from the user’s eye to the smartphone screen. We developed an iOS application that measures eye-to-phone distances at various angles using the built-in front-facing camera and TrueDepth sensor. The actual position of the phone is precisely controlled and recorded by fixing the head position and placing the phone in a robotic arm. Our results indicate that ARKit is capable of producing accurate measurements, with overall errors ranging between 0.88% and 9.07% of the actual distance across various head positions. The accuracy of ARKit may be affected by several factors, such as head size, position, device model, and temperature. Our findings suggest that ARKit is a useful tool for developing applications aimed at preventing eye damage caused by smartphone use.
(This article belongs to the Special Issue Smart Sensing Systems for Health Monitoring)
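The reported accuracy figures boil down to a one-line metric: percentage deviation of an ARKit distance reading from the robot-arm ground truth. The sample values below are invented for illustration.

```python
# Illustrative error metric: percentage deviation of a measured
# eye-to-phone distance from the known (robot-arm) ground truth.

def percent_error(measured_mm, actual_mm):
    return abs(measured_mm - actual_mm) / actual_mm * 100.0

# e.g. ARKit reports 327 mm while the arm held the phone at 300 mm
err = percent_error(327.0, 300.0)
print(round(err, 2))  # 9.0, near the upper end of the reported range
```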

25 pages, 2217 KB  
Article
Pedestrian Augmented Reality Navigator
by Tanmaya Mahapatra, Nikolaos Tsiamitros, Anton Moritz Rohr, Kailashnath K and Georgios Pipelidis
Sensors 2023, 23(4), 1816; https://doi.org/10.3390/s23041816 - 6 Feb 2023
Cited by 5 | Viewed by 2895
Abstract
Navigation is often regarded as one of the most exciting use cases for Augmented Reality (AR). Current AR Head-Mounted Displays (HMDs) are rather bulky and cumbersome to use and therefore do not yet offer a satisfactory user experience for the mass market. However, the latest generation of smartphones offers AR capabilities out of the box, sometimes even with pre-installed apps. Apple’s ARKit framework is available on iOS devices and free for developers to use; Android features a counterpart, ARCore. Both systems work well for small, spatially confined applications but lack global positional awareness, a direct result of a limitation in current mobile technology: Global Navigation Satellite Systems (GNSSs) are relatively inaccurate and often do not work indoors because their signals cannot penetrate solid objects such as walls. In this paper, we present the Pedestrian Augmented Reality Navigator (PAReNt) iOS app as a solution to this problem. The app implements a data fusion technique to increase accuracy in global positioning and showcases AR navigation as one use case for the improved data. ARKit provides data about the smartphone’s motion, which is fused with GNSS data and a Bluetooth indoor positioning system via a Kalman Filter (KF). Four KFs with different underlying models were implemented and independently evaluated to find the best filter. The evaluation measures the app’s accuracy against a ground truth under controlled circumstances. Two main testing methods were introduced and applied to determine which KF works best. Depending on the evaluation method, this novel approach improved accuracy over the raw sensor data by 57% (when GPS and AR were used) or 32% (when Bluetooth and AR were used).
(This article belongs to the Section Navigation and Positioning)
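The fusion idea described above can be conveyed with a minimal one-dimensional Kalman filter: ARKit’s relative displacement drives the prediction step, and each GNSS fix drives the update step. The noise parameters and data are illustrative; the paper’s four filters use richer models than this sketch.

```python
# Minimal 1D Kalman filter sketch of ARKit + GNSS fusion.
# q: process noise (ARKit drift), r: measurement noise (GNSS scatter).

def kf_step(x, p, arkit_delta, gnss_z, q=0.01, r=4.0):
    # Predict: advance the state by ARKit's measured displacement.
    x_pred = x + arkit_delta
    p_pred = p + q
    # Update: correct with the (noisy) absolute GNSS position.
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (gnss_z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0
# Three steps of 1 m ARKit motion, with scattered GNSS fixes.
for delta, z in [(1.0, 1.3), (1.0, 2.1), (1.0, 2.8)]:
    x, p = kf_step(x, p, delta, z)
print(round(x, 3))  # close to the true 3.0 m travelled
```

Because `r` is much larger than `q`, the filter leans on ARKit’s smooth motion and only gently corrects toward each GNSS fix.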

14 pages, 7439 KB  
Article
A Benchmark Comparison of Four Off-the-Shelf Proprietary Visual–Inertial Odometry Systems
by Pyojin Kim, Jungha Kim, Minkyeong Song, Yeoeun Lee, Moonkyeong Jung and Hyeong-Geun Kim
Sensors 2022, 22(24), 9873; https://doi.org/10.3390/s22249873 - 15 Dec 2022
Cited by 12 | Viewed by 4732
Abstract
Commercial visual–inertial odometry (VIO) systems have been gaining attention as cost-effective, off-the-shelf, six-degree-of-freedom (6-DoF) ego-motion-tracking sensors for estimating accurate and consistent camera pose data, in addition to their ability to operate without external localization from motion capture or global positioning systems. It is unclear from existing results, however, which commercial VIO platforms are the most stable, consistent, and accurate in terms of state estimation for indoor and outdoor robotic applications. We assessed four popular proprietary VIO systems (Apple ARKit, Google ARCore, Intel RealSense T265, and Stereolabs ZED 2) through a series of both indoor and outdoor experiments measuring their positioning stability, consistency, and accuracy. Across these challenging real-world scenarios, Apple ARKit proved the most stable, consistent, and accurate, with a relative pose error corresponding to a drift of about 0.02 m per second. We present our complete results as a benchmark comparison for the research community.
(This article belongs to the Special Issue Sensors for Navigation and Control Systems)
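The drift figure quoted above can be expressed as a simple rate: growth of the position error over the trajectory, normalised by elapsed time. The trajectories below are invented to show the arithmetic; the paper’s relative pose error metric is computed over full 6-DoF poses.

```python
# Sketch of a drift-rate metric: metres of accumulated position error
# per second of motion, from estimated vs. ground-truth 2D positions.
import math

def drift_per_second(est, gt, dt):
    """est, gt: lists of (x, y) positions sampled every dt seconds."""
    errs = [math.dist(e, g) for e, g in zip(est, gt)]
    return (errs[-1] - errs[0]) / (dt * (len(errs) - 1))

gt  = [(0, 0), (1, 0), (2, 0), (3, 0)]            # ground truth
est = [(0, 0), (1.0, 0.02), (2.0, 0.04), (3.0, 0.06)]  # VIO estimate
rate = drift_per_second(est, gt, dt=1.0)
print(round(rate, 3))  # 0.02 m/s of drift, matching the quoted order
```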

17 pages, 677 KB  
Article
Deep Mobile Linguistic Therapy for Patients with ASD
by Ari Ernesto Ortiz Castellanos, Chuan-Ming Liu and Chongyang Shi
Int. J. Environ. Res. Public Health 2022, 19(19), 12857; https://doi.org/10.3390/ijerph191912857 - 7 Oct 2022
Cited by 4 | Viewed by 2511
Abstract
Autistic spectrum disorder (ASD) is one of the most complex groups of neurobehavioral and developmental conditions, owing to impairment in three domains: social interaction, communication, and restricted, repetitive behaviors. Some children with ASD may not be able to communicate using language or speech. Many experts propose that continued therapy in the form of software training in this area might help to bring improvement. In this work, we propose the design of a software speech therapy system for ASD. We combined different devices, technologies, and features with home rehabilitation techniques: TensorFlow for image classification, ARKit for text-to-speech, a cloud database, binary search, natural language processing, a dataset of sentences, and a dataset of images, on two mobile operating systems used in daily life. The software combines several deep learning technologies and makes human–computer interaction therapy easy to conduct. We explain how these components were connected and made to work together, and we detail the software architecture and how each component operates within the integrated therapy system. Finally, the system allows patients with ASD to perform therapy anytime and anywhere, while transmitting information to a medical specialist.
(This article belongs to the Special Issue 3rd Edition of Big Data, Decision Models, and Public Health)

17 pages, 40659 KB  
Article
Benchmarking Built-In Tracking Systems for Indoor AR Applications on Popular Mobile Devices
by Emanuele Marino, Fabio Bruno, Loris Barbieri and Antonio Lagudi
Sensors 2022, 22(14), 5382; https://doi.org/10.3390/s22145382 - 19 Jul 2022
Cited by 11 | Viewed by 6306
Abstract
As one of the most promising technologies for next-generation mobile platforms, Augmented Reality (AR) has the potential to radically change the way users interact with real environments enriched with digital information. To achieve this potential, it is of fundamental importance to track and maintain accurate registration between real and computer-generated objects, which makes assessing tracking capabilities crucial. In this paper, we present a benchmark evaluation of the tracking performance of some of the most popular AR handheld devices, which can be regarded as a representative set of devices for sale on the global market. In particular, eight next-generation devices, including smartphones and tablets, were considered. Experiments were conducted in a laboratory using an external tracking system, following a methodology of three main stages: calibration, data acquisition, and data evaluation. The results show that the selected devices, in combination with their AR SDKs, exhibit different tracking performance depending on the covered trajectory.
(This article belongs to the Section Physical Sensors)

29 pages, 26009 KB  
Article
Evaluating 3D Human Motion Capture on Mobile Devices
by Lara Marie Reimer, Maximilian Kapsecker, Takashi Fukushima and Stephan M. Jonas
Appl. Sci. 2022, 12(10), 4806; https://doi.org/10.3390/app12104806 - 10 May 2022
Cited by 12 | Viewed by 9566
Abstract
Computer-vision-based frameworks enable markerless human motion capture on consumer-grade devices in real time, opening up new application possibilities in areas such as the health and medical sector. So far, research on mobile solutions has focused on 2-dimensional motion capture frameworks, whose analysis is limited by the viewing angle of the positioned camera. Newer frameworks enable 3-dimensional human motion capture, can be supported by additional smartphone sensors such as LiDAR, and promise to overcome the limitations of 2D frameworks by considering all three movement planes independently of the camera angle. In this study, we performed a laboratory experiment with ten subjects, comparing the joint angles in eight different body-weight exercises tracked by Apple ARKit, a mobile 3D motion capture framework, against a gold-standard motion capture system, the Vicon system. The 3D motion capture framework showed a weighted Mean Absolute Error of 18.80° ± 12.12° (ranging from 3.75° ± 0.99° to 47.06° ± 5.11° per tracked joint angle and exercise) and a Mean Spearman Rank Correlation Coefficient of 0.76 for the whole data set, with high variance of both metrics across the observed angles and performed exercises. The observed accuracy is influenced by the visibility of the joints and the observed motion. While the 3D motion capture framework is a promising technology that could enable several use cases in the entertainment, health, and medical areas, its limitations should be considered for each potential application.
(This article belongs to the Special Issue Applied Biomechanics and Motion Analysis)
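The two reported metrics are standard and easy to sketch: mean absolute error between framework and gold-standard joint angles, and the Spearman rank correlation. The angle values below are invented, and this tie-free rank implementation is a simplification of the usual definition.

```python
# Illustrative joint-angle comparison metrics (values are made up).

def mean_abs_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def spearman_rho(a, b):
    """Spearman rank correlation, assuming no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

vicon = [10.0, 45.0, 90.0, 120.0, 60.0]   # gold standard (degrees)
arkit = [12.0, 40.0, 95.0, 110.0, 65.0]   # mobile framework (degrees)
print(round(mean_abs_error(arkit, vicon), 1))   # 5.4
print(round(spearman_rho(arkit, vicon), 2))     # 1.0 (same ordering)
```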

23 pages, 7135 KB  
Review
A Bibliometric Narrative Review on Modern Navigation Aids for People with Visual Impairment
by Xiaochen Zhang, Xiaoyu Yao, Lanxin Hui, Fuchuan Song and Fei Hu
Sustainability 2021, 13(16), 8795; https://doi.org/10.3390/su13168795 - 6 Aug 2021
Cited by 5 | Viewed by 3942
Abstract
Innovations in specialized navigation systems have become prominent research topics. As an applied science for people with special needs, navigation aids for the visually impaired are a key sociotechnique that helps users independently navigate and access needed resources indoors and outdoors. This paper adopts the informetric analysis method to assess current research and explore trends in navigation systems for the visually impaired, based on bibliographic records retrieved from the Web of Science Core Collection (WoSCC). A total of 528 relevant publications from 2010 to 2020 were analyzed. This work answers the following questions: What are the publication characteristics and most influential publication sources? Who are the most active and influential authors? What are their research interests and primary contributions to society? What are the featured key studies in the field? What are the most popular topics and research trends, as described by keywords? Additionally, we closely investigate renowned works that use different multisensor fusion methods, which are believed to be the basis of upcoming research. The key findings of this work aim to help new researchers move quickly into the field by easily grasping the frontiers and trends of R&D in the research area. Moreover, we suggest that researchers embrace smartphone-based agile development, and pay more attention to prominent phone-based frameworks such as ARCore and ARKit, to achieve fast prototyping of their proposed systems. The study also provides references for associated fellows by highlighting the critical junctures of modern assistive travel aids for people with visual impairments.
(This article belongs to the Special Issue Critical Junctures in Assistive Technology and Disability Inclusion)

19 pages, 6860 KB  
Article
Augmented Reality and Machine Learning Incorporation Using YOLOv3 and ARKit
by Huy Le, Minh Nguyen, Wei Qi Yan and Hoa Nguyen
Appl. Sci. 2021, 11(13), 6006; https://doi.org/10.3390/app11136006 - 28 Jun 2021
Cited by 22 | Viewed by 8189
Abstract
Augmented reality is one of the fastest-growing fields, receiving increased funding over the last few years as people realise the potential benefits of rendering virtual information in the real world. Most of today’s marker-based augmented reality applications use local feature detection and tracking techniques, whose disadvantage is that the markers must be modified to match the particular classification algorithms or else suffer from low detection accuracy. Machine learning is an ideal solution for overcoming these drawbacks of image processing in augmented reality applications; however, traditional data annotation requires extensive time and labour, as it is usually done manually. This study incorporates machine learning, via deep neural networks, to detect and track augmented reality marker targets in an application. We first implement an auto-generated dataset tool for preparing the machine learning dataset. The final iOS prototype application combines object detection, object tracking, and augmented reality: the machine learning model is trained with YOLOv3, one of the most well-known object detection methods, to recognise the differences between targets, and the final product is built on ARKit, a valuable toolkit for developing augmented reality applications.
(This article belongs to the Collection Virtual and Augmented Reality Systems)

15 pages, 1365 KB  
Article
A Navigation and Augmented Reality System for Visually Impaired People
by Alice Lo Valvo, Daniele Croce, Domenico Garlisi, Fabrizio Giuliano, Laura Giarré and Ilenia Tinnirello
Sensors 2021, 21(9), 3061; https://doi.org/10.3390/s21093061 - 28 Apr 2021
Cited by 37 | Viewed by 15170
Abstract
In recent years, we have witnessed an impressive advance in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones are able to estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thus reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for indoor and outdoor localization and navigation for visually impaired people. While ARIANNA is based on the assumption that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ adds the possibility for users to interact more richly with the surrounding environment through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path, and provides automatic guidance along the route through haptic, speech, and sound feedback.
(This article belongs to the Section Intelligent Sensors)
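One building block of guidance along a recorded virtual path is measuring how far the user has strayed from it, so that feedback can steer them back. The geometry helper below is a hypothetical sketch of that idea, not ARIANNA+ code; the path and position are invented.

```python
# Hypothetical sketch: deviation of the user's position from a
# recorded virtual path, modelled as a 2D polyline.
import math

def dist_to_segment(p, a, b):
    """Distance from point p to segment a-b (2D)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def deviation_from_path(pos, path):
    return min(dist_to_segment(pos, path[i], path[i + 1])
               for i in range(len(path) - 1))

path = [(0, 0), (5, 0), (5, 5)]   # recorded virtual path (metres)
d = deviation_from_path((2.0, 1.5), path)
print(round(d, 2))  # 1.5 m off the path
```

A guidance loop would compare this deviation against a threshold and trigger haptic or audio cues accordingly.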

22 pages, 3876 KB  
Article
Design and Evaluation of a Web- and Mobile-Based Binaural Audio Platform for Cultural Heritage
by Marco Comunità, Andrea Gerino, Veranika Lim and Lorenzo Picinali
Appl. Sci. 2021, 11(4), 1540; https://doi.org/10.3390/app11041540 - 8 Feb 2021
Cited by 9 | Viewed by 3911
Abstract
PlugSonic is a suite of web- and mobile-based applications for the curation and experience of 3D interactive soundscapes and sonic narratives in the cultural heritage context. It was developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation) and consists of two main applications: PlugSonic Sample, to edit and apply audio effects, and PlugSonic Soundscape, to create and experience 3D soundscapes for headphone playback. The audio processing within PlugSonic is based on the Web Audio API and the 3D Tune-In Toolkit, while the mobile exploration of soundscapes in a physical space is achieved using Apple’s ARKit. The main goal of PlugSonic is technology democratisation: PlugSonic users, whether cultural institutions or citizens, are given all the instruments needed to create, process, and experience 3D soundscapes and sonic narratives, without the need for specific devices, external tools (software and/or hardware), specialised knowledge, or custom development. This paper presents the design and development choices and the user involvement processes, as well as a final evaluation conducted with inexperienced users on three tasks (creation, curation, and experience), demonstrating that PlugSonic is a simple, effective, yet powerful tool.

15 pages, 10047 KB  
Article
Reconstructive Archaeology: In Situ Visualisation of Previously Excavated Finds and Features through an Ongoing Mixed Reality Process
by Miguel Angel Dilena and Marie Soressi
Appl. Sci. 2020, 10(21), 7803; https://doi.org/10.3390/app10217803 - 3 Nov 2020
Cited by 7 | Viewed by 4370
Abstract
Archaeological excavation is a destructive process: rather few elements survive extraction, so it is hard to visualise the precise location of unearthed finds in a previously excavated research area. Here, we present a mixed reality environment that displays in situ 3D models of features that were formerly extracted and recorded with 3D coordinates during unearthing operations. We created a tablet application that allows the user to view the position, orientation, and dimensions of every recorded find while freely moving around the archaeological site with the device. To anchor the model, we used physical landmarks left at the excavation. A series of customised forms was created to show the different types of features onscreen by superimposing them over the terrain as perceived by the tablet camera. The application permits zooming in and out, querying for specific artefacts, and reading metadata associated with the archaeological elements. Back at the office, our environment enables accurate visualisation of the 3D geometry of previously unearthed features and their spatial relationships. The application is built with the Swift programming language, Python scripts, and ARKit technology. We present an example of its use at Les Cottés, France, a palaeolithic site where thousands of artefacts have been excavated from six superimposed layers with a complex conformation.
(This article belongs to the Special Issue 3D Virtual Reconstruction for Archaeological Sites)

17 pages, 5890 KB  
Article
AR Book-Finding Behavior of Users in Library Venue
by Chun-I Lee, Fu-Ren Xiao and Yi-Wen Hsu
Appl. Sci. 2020, 10(20), 7349; https://doi.org/10.3390/app10207349 - 20 Oct 2020
Cited by 5 | Viewed by 3853
Abstract
ARKit and ARCore, key technologies in recent augmented reality (AR) development, have allowed AR to become more integrated into our lives. However, how effective AR is in an auxiliary role in venue guidance, and how to collect the actual behaviors of users in physical venues, are worth exploring. This study used navAR, a spatial behavior analysis app developed by our research team, to collect the actual behaviors of participants in physical space via smartphone, such as time, distance travelled, and trajectory, and compared their book-finding behaviors in a library venue under a text scenario and an AR scenario, without any additional sensors or cameras. The experiment revealed that (1) AR targets made a significant difference in book search time, with participants finding some of the books significantly faster; (2) participants showed no significant differences in distance travelled; (3) with an AR target, the book-finding trajectories of participants were significantly more regular; and (4) the AR guidance system had good usability. The results can facilitate planning AR-assisted indoor venue routes, improve venue and exhibition tour experiences, and enable AR to be used for crowd-flow diversion. Furthermore, this study provides a methodology for future analyses of user behavior in physical spaces.
(This article belongs to the Special Issue Extended Reality: From Theory to Applications)
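One of the collected metrics, distance travelled, reduces to summing segment lengths over a sampled position trajectory. The sample points below are invented; navAR’s actual sampling and coordinate frame are not described here.

```python
# Illustrative metric: total distance travelled along a sampled
# 2D position trajectory (e.g. one participant's book search).
import math

def distance_travelled(trajectory):
    return sum(math.dist(trajectory[i], trajectory[i + 1])
               for i in range(len(trajectory) - 1))

track = [(0, 0), (3, 0), (3, 4), (6, 4)]   # sampled positions (metres)
print(distance_travelled(track))  # 10.0
```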
