Article

Performance Investigations of VSLAM and Google Street View Integration in Outdoor Location-Based Augmented Reality under Various Lighting Conditions

by Komang Candra Brata 1,2,*, Nobuo Funabiki 1,*, Prismahardi Aji Riyantoko 1, Yohanes Yohanie Fridelin Panduman 1 and Mustika Mentari 1

1 Graduate School of Natural Science and Technology, Okayama University, Okayama 700-8530, Japan
2 Department of Informatics Engineering, Universitas Brawijaya, Malang 65145, Indonesia
* Authors to whom correspondence should be addressed.
Electronics 2024, 13(15), 2930; https://doi.org/10.3390/electronics13152930
Submission received: 6 June 2024 / Revised: 12 July 2024 / Accepted: 20 July 2024 / Published: 24 July 2024
(This article belongs to the Special Issue Perception and Interaction in Mixed, Augmented, and Virtual Reality)

Abstract

The growing demand for Location-based Augmented Reality (LAR) experiences has driven the integration of Visual Simultaneous Localization And Mapping (VSLAM) with Google Street View (GSV) to enhance accuracy. However, the impact of ambient light intensity on accuracy and reliability is underexplored, posing significant challenges for outdoor LAR implementations. This paper investigates the impact of light conditions on the accuracy and reliability of the VSLAM/GSV integration approach in outdoor LAR implementations. This study fills a gap in the current literature and offers valuable insights into vision-based approach implementation under different light conditions. Extensive experiments were conducted at five Point of Interest (POI) locations under various light conditions, with a total of 100 datasets. Descriptive statistical methods were employed to analyze the data and assess the performance variation. Additionally, an Analysis of Variance (ANOVA) was used to assess the impact of different light conditions on the accuracy metric and horizontal tracking time, determining whether there are significant differences in performance across varying levels of light intensity. The experimental results revealed that a significant correlation (p < 0.05) exists between the ambient light intensity and the accuracy of the VSLAM/GSV integration approach. Through confidence interval estimation, a minimum illuminance of 434 lx is found to be needed to provide feasible and consistent accuracy. Variations in visual references, such as wet surfaces in the rainy season, also impact the horizontal tracking time and accuracy.

1. Introduction

The anchor positioning method plays a vital role in Location-Based Augmented Reality (LAR) implementation. Inaccurate Augmented Reality (AR) anchor point information could be confusing to users, resulting in poor user experiences [1]. There exist two main approaches to implementing the LAR system: a sensor fusion approach and a vision-based approach [2,3,4,5].
The sensor fusion approach commonly relies on Global Positioning System (GPS) and Inertial Measurement Unit (IMU) sensors to determine the user's location dynamically and uses this information to generate AR content [6,7]. Although the sensor fusion approach is relatively fast and easy to implement, GPS sensors and compasses have limited accuracy. Hence, the accuracy often falls short of providing seamless and immersive LAR experiences, especially in dynamic outdoor environments [8].
The vision-based approach, as an alternative, utilizes available visual data and computer vision techniques to model the relationship between image observations and the user's pose [9]. Compared to the sensor fusion approach, it uses image references captured by the user to improve positioning and localization accuracy. In this regard, Visual Simultaneous Localization And Mapping (VSLAM) is one of the most widely used methods for providing vision-based solutions in the augmented reality domain [10]. This approach can accurately determine the user's pose; however, it requires more computational resources than the sensor fusion approach and also needs a large set of reference images [11].
The advent of cloud computing and computer vision techniques has driven recent progress in AR anchor positioning, where consistent accuracy improvements are observed. Previously, we addressed the limitations of the sensor fusion approach by integrating VSLAM with Google Street View (GSV) to enhance its accuracy in outdoor settings. With GSV imagery references and a cloud-matching approach, we reduced the need to store the huge number of reference data points on a dedicated server that is common in VSLAM implementations. ARCore, also known as Google Play Services for AR, was used as the AR engine library due to its compatibility with the AR environment for Android development [12]. However, further analysis of this method is still needed to ensure the feasibility of its implementation.
Many research works on AR systems utilize visual approaches and investigate their performance with various indicators and metrics, such as localization accuracy, tracking time, and running load computation [13,14,15]. However, the influence of ambient light intensity on the performance of VSLAM/GSV integration in outdoor AR applications remains a critical research area. The impact of varying light conditions on the accuracy and reliability of visual data integration for outdoor LAR implementations is underexplored, posing a significant challenge in optimizing outdoor AR experiences.
This study investigates the impact of different light conditions on the performance of VSLAM/GSV integration in outdoor LAR setups. Understanding how ambient light intensity affects visual tracking, feature matching, alignment precision, and pose estimation in outdoor LAR scenarios is essential for improving the accuracy and reliability of AR applications. Additionally, identifying the minimum light intensity threshold necessary for VSLAM/GSV to function efficiently and accurately is crucial to ensure high performance and reliability in real-world implementations. By addressing this gap, this study aims not only to highlight the impact of light intensity on system performance but also to provide insights into optimizing vision-based approach implementation in outdoor AR scenarios for future similar research directions.
For evaluations, multiple experiments at five Point of Interest (POI) locations under various lighting conditions were rigorously analyzed using descriptive statistics, Analysis of Variance (ANOVA), and confidence interval estimation to assess performance variations based on ambient light intensity. The results identified a significant correlation between light intensity and the precision of the VSLAM/GSV implementation in outdoor augmented reality scenarios. In addition, we identified the minimum illuminance level required for the VSLAM/GSV approach to maintain consistent accuracy. Environmental factors, such as wet surfaces, influenced tracking time and accuracy, emphasizing the importance of considering real-world conditions in optimizing VSLAM/GSV performance. These insights can guide the establishment of a minimum ambient light intensity threshold for optimal functionality and accuracy of visual approaches like VSLAM and GSV in outdoor AR scenarios.
The remainder of this article is organized as follows: Section 2 introduces the related work. Section 3 describes the prototype of the LAR system used in the experiment, which was developed based on our previous research. Section 4 presents the experiment design and testing scenarios of the experiment. Section 5 shows the results and discusses their analysis. Finally, Section 6 provides conclusions with future work.

2. Related Works in the Literature

Previous studies have explored various methodologies to improve LAR localization and mapping in outdoor environments, including visual positioning and integration of VSLAM with innovative technologies such as GSV. In this section, we provide an overview of related works that support the findings presented in this paper.

2.1. Visual Positioning in Outdoor LAR

Visual positioning plays a crucial role in outdoor LAR applications, enabling precise localization and mapping in dynamic environments. Previous research has explored the integration of visual positioning techniques, such as image-based tracking and feature extraction, to enhance the accuracy and robustness of outdoor AR experiences. Research evidence shows that the visual positioning approach can provide a good level of user pose and localization with sufficient data references [16,17,18].
In [19], Fernández et al. conducted a comprehensive comparative analysis of methods for describing panoramic images, aiming to identify the most appropriate techniques for outdoor visual navigation tasks, including those using the GSV database. The research focused on evaluating the localization accuracy and computational efficiency of different algorithms. This study provides valuable insights into the optimization of visual descriptors for effective outdoor navigation in urban environments. Using visual indicators from the surrounding environment, visual positioning methods improve the accuracy of AR content alignment and immersive user interaction in outdoor environments.
In [20], Wang et al. propose a scene recognition and target tracking system combining Faster R-CNN for object detection with AKAZE features and image matching. While they evaluate accuracy, training time, and recognition time, their work lacks assessment of image quality and environmental factors like illumination and seasonal variations.
In [21], Huang et al. introduce a rotation-invariant estimation method for high-precision geo-registration in AR maps, which improves the accuracy of generating heading data from low-cost hardware by utilizing real-time kinematic GPS and visual-inertial fusion. Unfortunately, this research does not discuss the detailed effect of visual cognitive difficulties on system performance.

2.2. VSLAM and GSV in AR Applications

VSLAM technology is promising in improving the precision of AR anchor points in outdoor environments. Using real-time camera pose estimation and feature matching, VSLAM enhances location accuracy and robust alignment of AR content in outdoor scenarios. The integration of VSLAM in AR applications offers a compelling solution for scenarios where accuracy is paramount, such as pedestrian navigation and outdoor visualization.
In [22], Jinyu et al. used visual sensors such as single or multiple cameras to estimate camera positions and scene structures according to the theory of multiview geometry in VSLAM implementations. VSLAM algorithms extract features from camera images and use them to estimate the camera’s movements and scene 3D structures. The algorithm usually consists of two main components: the extraction and tracking of features, as well as the estimate and mapping of positions. This study presents a detailed explanation of the implementation of VSLAM in an AR monocular camera, including the basic algorithm and limitations.
In [23], Xu et al. improve an ORB RGB-D SLAM with a 2D Occupancy Grid Map (OGM). The 2D OGM is constructed using 3D camera positions estimated by visual SLAM (vSLAM) and laser scanning extracted from camera-observed point clouds from these positions. In addition, the visualization tool of the Robot Operating System (ROS) is used to overlay the current camera positions and observations in real time using virtual laser scans on OGM. The evaluation results showed high accuracy of localization. Unfortunately, this method can only be applied in indoor scenarios.
In [24], Sumikura et al. introduced an open-source visual SLAM framework with high usability and extensibility to enhance spatial understanding and AR experiences in the context of VSLAM. By addressing the limitations of ORB-SLAM2, their work contributes to the practical deployment of VSLAM solutions on monocular camera devices such as smartphones. We adopted this work as the base VSLAM library in this paper.
GSV serves as a valuable resource for AR implementations, providing a vast database of imagery references for various outdoor locations worldwide. By leveraging GSV data, AR applications can enhance spatial recognition and improve the alignment of virtual content with real-world environments. Incorporating GSV imagery into AR implementations contributes to more immersive and contextually relevant user experiences, particularly in outdoor settings.
In [25], Biljecki et al. prove that the extensive reference of geospatial and visual data offered by GSV can be useful for location-based service applications. This extensive database not only enhances the visual reference matching process but also provides accurate geospatial data for improved localizations without collecting reference image data manually [26].
In [27], Graham et al. explored the application of GSV images and geotagged photographs from Google Maps as reference points within AR systems in urban environments. Their research demonstrated how integrating cloud-based imagery services can enhance the accuracy and reliability of AR experiences by providing a rich, detailed visual database that aids in the localization and mapping processes.
The above-mentioned studies have demonstrated the potential of combining VSLAM technology with the extensive visual reference database provided by GSV to enhance sensor fusion performance and localization accuracy in outdoor LAR applications. By harnessing the rich visual data from GSV and the real-time mapping capabilities of VSLAM, our previous implementation aims to address the limitations of traditional GPS-based methods and elevate the quality of outdoor AR experiences.

2.3. AR Performance and Experience Measurement

Studies measuring the AR experience from multiple factor evaluation are becoming more prevalent. In their study, Chalhoub and Ayer [28] investigated the effectiveness of AR applications in construction layout tasks by comparing the efficiency, accuracy, and user effort between AR and traditional paper methods. They involved 32 electrical construction practitioners in completing layout tasks using both AR and paper documentation. The results showed that AR facilitated faster point layout and reduced physical and mental demands compared to paper methods. This research offers valuable insights into the practical use of AR in construction, highlighting its advantages and limitations for practitioners and researchers in developing and evaluating AR technologies for construction tasks. However, this research did not consider the importance of marker detection factors such as the quality of reference images and light conditions during the experiment, which could be an area for future investigation.
Recent studies have also mentioned the significance of considering ambient light conditions in AR performance investigation, as the influence of ambient light intensity on AR environments is a critical factor that can impact the performance and reliability of AR systems, particularly in outdoor settings [29,30,31]. Understanding the implications of ambient light on AR experiences is essential for optimizing AR applications under different lighting conditions and ensuring consistent performance across diverse environmental settings.

3. LAR System for Experiments

In this section, we briefly introduce our previous work, which proposed the VSLAM/GSV implementation in outdoor LAR scenarios, and the enhancements made in this study to support our performance investigation. We propose a hypothesis based on user experience theory to investigate users' perceptions of AR and its effects on user satisfaction. In [32], Spittle et al. state that the quality of the AR object anchor point and precise POI alignment are key factors for immersive AR experiences. Our hypothesis therefore focuses on the impact of various light conditions on providing immersive information that accurately aligns with the user's point of view. Specifically, our hypothesis centers on the premise that different lighting conditions, including daylight, overcast, rainy, and low-light environments, significantly affect the performance metrics of the VSLAM/GSV implementation, leading to variations in feature matching, accuracy, and precision of AR object placement. Through rigorous experimentation and analysis, we seek to validate this hypothesis and gain insights into the adaptive strategies required to optimize outdoor LAR systems under diverse lighting scenarios.

3.1. Overview of VSLAM and GSV Integration Approach

The VSLAM/GSV integration presents a state-of-the-art approach to enhancing the precision and accuracy of AR applications in outdoor environments. By combining the real-time mapping capabilities of VSLAM with the extensive visual reference database provided by GSV, our previous research aims to improve sensor fusion performance and localization accuracy in outdoor LAR scenarios. This integration offers a comprehensive solution for aligning AR content with the real world, leveraging the strengths of both VSLAM and GSV to deliver immersive and accurate outdoor AR experiences.
In [33], we conducted a preliminary implementation of the ARCore engine with the sensor fusion approach in both Android native and Unity for an outdoor navigation system. This LAR application integrates sensor fusion (GPS and compass) with the VSLAM technique. The results showed that Android native offers lower resource consumption than the Unity platform. Thus, we leveraged the implementation with VSLAM and GSV to support the outdoor scenario in the Android native environment. The main limitation of VSLAM is the necessity of a large image reference database. Furthermore, VSLAM may face challenges in environments with low texture or repetitive patterns, as it becomes difficult to identify unique features for tracking. Therefore, GSV cloud matching is employed to enhance image tracking and refine pose estimation.
In [34], we demonstrated that the integration of VSLAM with GSV significantly enhanced the precision of anchor points in outdoor location-based augmented reality applications under daylight conditions. The evaluation results indicated that the VSLAM/GSV integration approach facilitated efficient horizontal tracking of surface features in outdoor environments, showing responsiveness and stability in aligning horizontal surfaces. This contributed to seamless AR experiences and precise spatial alignment. However, even though the AR anchor point's precise alignment can greatly affect user immersion and interaction, the study highlighted the higher computational and memory load on the device compared to the conventional sensor fusion method, with the utilization of VSLAM and GSV data being a key contributor to this increased load. Moreover, the system's sensitivity to varying lighting conditions in outdoor environments remains underexplored. We hypothesize that light intensity levels can affect the accuracy of surface feature detection and tracking, potentially causing deviations in AR anchor point precision. To investigate this hypothesis, we used daylight data as the control group, while the other lighting conditions were used as experimental groups.

3.2. Implementation of LAR System

To support the investigation of varying light intensities on VSLAM/GSV performance, we re-developed our previous prototype, powered by the ARCore SDK, on the Android platform. Building upon our previous work [34], the current LAR system architecture has been enhanced with advanced light sensing to improve environmental adaptability. The LAR prototype integrates VSLAM technology with GSV to enhance Point of Interest (POI) alignment in real-world settings. VSLAM offers precise user localization by combining hardware and software to provide contextually relevant virtual content. A light intensity measurement feature was also added to record the light intensity in real time. In contrast to our previous implementation, the Sensor Manager API in the current system has been optimized to handle real-time data from the built-in smartphone light sensor. The LAR system's architecture employs cloud and microservices, leveraging built-in smartphone sensors, the Google ARCore SDK for AR rendering, VSLAM for localization, and cloud-based services to process geospatial data from Google Maps, Google Earth, and GSV.
The application starts by retrieving predefined true POI coordinates from the SEMAR server, which functions as a back-end server. The SEMAR server offers a big data environment with capabilities for data aggregation, synchronization, classification, and visualization, which is accessible via a REST API in JavaScript Object Notation (JSON) format [35]. The prototype then obtains raw user location data through GPS and IMU sensors, enhanced by the Fused Location Provider API and processed by the Location Provider Module. The VSLAM technology refines user localization, especially for vertical positioning, while Google Earth and GSV provide geospatial and visual references. The ARCore SDK generates AR objects in real-world coordinates, managed by an event handler for user interaction with the AR and map views. Throughout this process, ambient light sensors continuously capture illuminance levels. The generated AR data, including anchor IDs, estimated POI coordinates, and illuminance levels, are stored on the SEMAR server for further processing. Figure 1 illustrates the detailed system architecture of the LAR system.
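To make this data flow concrete, the sketch below retrieves a list of predefined POI records from a back-end REST endpoint and parses the JSON response on Android. It is a hedged illustration only: the endpoint path and the JSON field names (poi_id, lat, lng) are hypothetical placeholders, since the actual SEMAR API schema is not reproduced in this paper, and the request would need to run off the main thread in a real application.

```kotlin
import org.json.JSONArray
import java.net.HttpURLConnection
import java.net.URL

// Hypothetical POI record; field names are illustrative, not the actual SEMAR schema.
data class Poi(val id: String, val latitude: Double, val longitude: Double)

// Fetches predefined true POI coordinates from a back-end REST endpoint returning a JSON array.
// Must be called from a background thread (network access is not allowed on the UI thread).
fun fetchPois(endpoint: String): List<Poi> {
    val connection = URL(endpoint).openConnection() as HttpURLConnection
    connection.requestMethod = "GET"
    return try {
        val body = connection.inputStream.bufferedReader().use { it.readText() }
        val array = JSONArray(body)
        (0 until array.length()).map { i ->
            val obj = array.getJSONObject(i)
            Poi(
                id = obj.getString("poi_id"),     // hypothetical field name
                latitude = obj.getDouble("lat"),  // hypothetical field name
                longitude = obj.getDouble("lng")  // hypothetical field name
            )
        }
    } finally {
        connection.disconnect()
    }
}
```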
The primary input in the proposed system includes sensors embedded in the user’s device: GPS, accelerometers, and digital compasses. The GPS sensor provides initial geospatial positioning, determining the user’s exact location. Accelerometers measure acceleration forces, aiding in motion sensing and movement tracking. Digital compasses provide orientation data, ensuring AR content aligns correctly with the user’s physical surroundings.
The AR engine serves as the core computational module of the location-based AR application system, comprising software libraries, frameworks, and development kits. This study employs the ARCore SDK for Android version 1.39, which supplies the necessary Application Programming Interfaces (APIs) to process sensor data, map the environment, and render 2D or 3D objects in real time. The AR engine handles sensor fusion, combining data from multiple sensors for accurate tracking and positioning. Once the user’s location coordinates (latitude and longitude) are obtained, the system enhances the sensor data with a VSLAM module and sends location data to the Google API for cloud-matching with the GSV database.
Based on the user’s location data, the system generates AR anchor objects. The LAR application accesses sensor data, communicates with the AR engine, and determines the placement of AR objects within the user’s field of view to create immersive, contextually relevant experiences. Figure 2 illustrates the general workflow of how VSLAM operates in conjunction with GSV. The detailed methods and development processes of the proposed prototype are described in previous research [34].
The primary function of the proposed system is to identify the user’s location and display stored AR objects. Figure 3 illustrates sample interfaces in the LAR prototype. Upon opening the application, users are presented with a 2D map serving as an object browser, allowing exploration of the surrounding area and potential viewing of stored AR objects within the map (Figure 3a). Since this prototype is in the preliminary stage, the AR anchor objects are not intended for navigation guidance but rather only for testing purposes, such as generating an AR object in the Point of Interest (POI) location (Figure 3b). Essential LAR positioning variables, including the horizontal position (X-axis), the vertical position (Y-axis), and the device heading (Z-axis), are displayed at the top of the screen to facilitate testing, with a color code indicating accuracy levels (red for low accuracy and green for high accuracy).

4. Experiment Design and Implementation

In this section, we provide a detailed discussion of the experimental design and implementation, outlining the methodology used to investigate the impact of varying light conditions on the integration of VSLAM/GSV in outdoor LAR systems. The rationale behind selecting specific POI locations, the data collection process, and the statistical analyses performed to evaluate system performance under different lighting scenarios are elaborated.

4.1. Experimental Design

This study was designed to comprehensively assess how lighting conditions impact the performance of the integration approach in outdoor LAR systems. We formulated the following hypotheses: the null hypothesis (H0) states that there is no significant difference in accuracy between the categories of the independent variable (natural light condition), while the alternative hypothesis (H1) states that there is a significant difference. By conducting experiments at five specific POI locations, the study investigated the influence of ambient light intensities on feature matching, accuracy, and precision of AR object placement.
Descriptive statistics were used to analyze the data, providing insight into performance variations under different lighting scenarios. Four lighting conditions (daylight, overcast, rain, and low light) were considered as independent variables to evaluate system performance. The light intensities can vary with seasonal conditions and geographical location. Previous research [36,37,38] recorded typical illuminance levels across seasons as follows: daylight ranges from 1000 to 100,000 lx, overcast from 400 to 1000 lx, rain from 100 to 1000 lx, and low light is less than 200 lx. The various light intensities in daylight conditions were used as the control group, while the rest were used as experimental groups. The sample includes 100 datasets from five designated POIs under the four lighting conditions. In this study, a 95% confidence level and a 0.05 significance level (p = 0.05) were adopted from the existing literature [39]. The experimental design flow is depicted in Figure 4.

4.2. Hardware Setup for Data Collection

This subsection outlines how the devices were positioned, connected, and utilized to capture the relevant data points and measurements. Data collection for this study was performed using a Sony Xperia XZ1 G8342 smartphone for all experiments at the POI locations to avoid inconsistency in the performance data. This device is equipped with an advanced suite of sensors that enhances its suitability for outdoor data collection in AR applications. Key specifications of the Xperia XZ1 are listed in Table 1.
In this phase, the testing smartphone was installed with the LAR application, which also enables the GPS, camera, IMU sensors, and Internet connection. The setup also included ensuring a proper timing schedule for lighting conditions for each testing session to accurately simulate real-world outdoor scenarios. Data collection procedures were standardized across all testing sessions in all POI locations to maintain consistency and reliability in the captured datasets.

4.3. Testing Locations and Environments

This study conducted rigorous evaluations in diverse outdoor scenarios to verify the experimental results. Five urban settings at Okayama University, Japan, were used as testing locations, including specific university buildings, student dormitories, and common public areas such as parking areas. We also chose the POI locations based on GSV service availability. The choice of diverse outdoor environments aimed to emulate real-world conditions where LAR applications are commonly used by people, especially students, such as pedestrian navigation, tourism, and outdoor gaming. The performance investigations were conducted in winter conditions at different time intervals. Ambient light levels were classified into four categories: daylight, day overcast, rainy, and low light. This categorization allowed for a comprehensive analysis of the impact of different lighting conditions on the performance of the LAR system in winter settings.

4.3.1. Definitions of Variables

In the context of the performance investigation of VSLAM/GSV in outdoor location-based augmented reality under various lighting conditions, several key variables are defined to provide clarity and structure to the study. Light intensity (lx) measures the ambient light level, serving as a critical environmental factor influencing the performance of visual tracking, feature matching, alignment precision, and pose estimation in outdoor LAR scenarios. The distance error (in meters) is calculated as the distance between the actual predefined POI location coordinates (latitude and longitude) and the estimated POI coordinates generated by the LAR app. The tracking process time (in seconds) was also captured to investigate the impact of light intensity on horizontal surface tracking efficiency.

4.3.2. Independent Variable Selection

In this study, different levels of ambient light intensity are considered to assess their impacts on surface tracking performance, alignment precision, and pose estimation errors in the prototype implementation. Ambient light intensity is measured as illuminance (lx), the amount of luminous flux spread over a given area: one lux corresponds to one lumen of luminous flux incident on one square meter of surface, so the greater the illuminance, the higher the brightness level. The illuminance calculation is given in Equation (1) [40].
E = Φ / A,
where:
E = illuminance in lux (lx);
Φ = luminous flux in lumens (lm);
A = area in square meters (m²).
Equation (1) means that 1 lux equals 1 lumen per square meter (1 lx = 1 lm/m²). To measure illuminance values, we developed a light sensor module in our LAR prototype using the Android Sensor Manager class. Embedded Android smartphone light sensors can typically measure from 0 lx to 100,000 lx [41]. The ambient light levels were then categorized into four categories: daylight, day overcast, rainy, and low light. The illuminance values were recorded in each POI experiment, and the time taken by the system to track and align virtual content with the real-world surroundings under each light condition was also recorded in the dataset. By selecting these independent variables, this study aims to provide a comprehensive analysis of the performance of VSLAM/GSV integration in outdoor LAR scenarios.
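As a minimal sketch of how the light sensor module can obtain such illuminance readings (assuming a standard Android SensorManager setup; the wiring into the LAR prototype's logging pipeline is omitted), Sensor.TYPE_LIGHT reports ambient illuminance in lux directly:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Minimal ambient-light reader: Sensor.TYPE_LIGHT reports illuminance in lux (lx).
class AmbientLightReader(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val lightSensor: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT)

    // Latest illuminance value, updated on every sensor event.
    var currentLux: Float = 0f
        private set

    fun start() {
        lightSensor?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        // For TYPE_LIGHT sensors, values[0] is the ambient light level in lx.
        currentLux = event.values[0]
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) { /* not needed here */ }
}
```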

4.4. Data Collection and Processing

As a way to concretely assess the effect of ambient light intensity on POI anchor point accuracy and precision, a direct comparison experiment was conducted across various testing environments. The evaluation involved five POIs around Okayama University, Japan, with each POI undergoing five testing sessions using the same device for consistency. Each POI was carefully prepared, ensuring GPS data and GSV services were available and the LAR system was ready for data capture. Four lighting conditions—daylight, overcast, rainy, and low light—served as independent variables to evaluate system performance under different ambient light intensities. Testing was conducted from 10:00 AM to 4:00 PM for daylight, day overcast, and rainy, and after 6:00 PM for low light.
Data on true anchor point coordinates, estimated coordinates, and tracking process times were collected at each POI and lighting condition. The true anchor location represented the designated coordinate (latitude and longitude) for the AR POI to be placed, while the estimated location was the ground truth coordinates generated by the LAR prototype. The data were sent to the SEMAR server for further analysis. The process was repeated over different days, resulting in a total of 100 datasets (5 POIs × 5 sessions × 4 conditions), providing a comprehensive understanding of the performance of the VSLAM/GSV integration under varying light intensities. This iterative testing approach with consistent device usage enabled a rigorous evaluation of the accuracy of the LAR anchor point estimation in real-world outdoor environments. Figure 5 depicts the data collection and comparison framework in this study.
The collected data were analyzed to assess the impact of lighting conditions on the system's performance and to identify any trends or patterns that may influence the effectiveness of the VSLAM/GSV integration approach. The ANOVA analysis was a key component of the experimental design; it can determine whether there are statistically significant differences in the performance metrics of the VSLAM/GSV integration approach under different lighting conditions. By comparing the means of alignment precision and pose estimation errors across varying light intensities, ANOVA can provide insights into how ambient light impacts the system's performance and can help identify the optimal lighting conditions for the VSLAM/GSV approach to function effectively. Moreover, confidence interval estimation is employed to determine the range of ambient light intensities within which the VSLAM/GSV integration approach consistently provides accurate results. By calculating confidence intervals for the accuracy metric at different light levels, we can establish a threshold lux value below which the system's performance may be compromised. This information is crucial for setting operational guidelines and ensuring the reliability of the application in outdoor LAR scenarios. Figure 6 illustrates experimental conditions in daylight, overcast, and rainy settings, with a drop in system performance observed in low-light conditions (see Figure 6d).
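To make the role of the one-way ANOVA concrete, the following sketch computes the F statistic and its degrees of freedom from grouped distance-error samples. It is a plain textbook implementation shown for illustration under the usual equal-variance assumption, not the statistical tooling actually used to produce the reported results.

```kotlin
private fun sq(x: Double) = x * x

// One-way ANOVA: F = between-group mean square / within-group mean square.
// groups holds one list of distance errors (in meters) per lighting condition.
fun oneWayAnovaF(groups: List<List<Double>>): Triple<Double, Int, Int> {
    val n = groups.sumOf { it.size }          // total number of observations
    val k = groups.size                       // number of lighting-condition groups
    val grandMean = groups.flatten().average()

    // Between-group and within-group sums of squares.
    val ssBetween = groups.sumOf { g -> g.size * sq(g.average() - grandMean) }
    val ssWithin = groups.sumOf { g -> g.sumOf { x -> sq(x - g.average()) } }

    val dfBetween = k - 1
    val dfWithin = n - k
    val f = (ssBetween / dfBetween) / (ssWithin / dfWithin)
    return Triple(f, dfBetween, dfWithin)     // compare F against F(dfBetween, dfWithin)
}
```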

4.4.1. Error Distance Calculation

The distance error evaluation aims to assess the accuracy and precision of AR object placement. Based on our previous research result in [34], we used the same evaluation method to calculate the distance error between the predefined AR object location and the ground truth estimated location generated by the LAR system. Using data on true anchor point coordinates and estimated coordinates, we measured the distance between the predefined locations and the estimated anchor points at each Point of Interest (POI) location. We then calculated the Mean Error (ME) and Standard Deviation (SD). ME provides an overall measure of accuracy by averaging the differences between true and estimated distances, while SD indicates the variability and consistency of these differences. ME and SD were computed to compare the precision of AR anchor alignment at each POI location.
To calculate distance errors, we employed the Haversine formula, which computes the distance between two points on the surface of a sphere based on their latitudinal and longitudinal coordinates [42]. In this context, the Haversine formula calculates the great-circle distance between the true location (ϕ_A, λ_A) and the estimated location (ϕ_B, λ_B) for each POI coordinate. The detailed equations of the Haversine formula are provided in Equations (2) and (3).
θ = 2 arcsin( √( sin²(Δϕ/2) + cos(ϕ_A) cos(ϕ_B) sin²(Δλ/2) ) ),
where:
A = true coordinate;
B = estimated coordinate;
θ = central angle between the two points;
ϕ_A, ϕ_B = latitudes of the two points;
λ_A, λ_B = longitudes of the two points;
Δϕ = ϕ_B − ϕ_A (difference in latitude between the two points);
Δλ = λ_B − λ_A (difference in longitude between the two points).
The distance D between the two points is given by
D = θ × 6371 km,
where:
D = distance between the two points;
6371 = mean radius of the Earth in kilometers.
From the collected coordinate values, we calculated the Mean Error (ME) and Standard Deviation (SD) for each POI location using Equations (4) and (5), respectively [43].
ME = (1/n) Σ_{i=1}^{n} |True_i − Estimated_i|,
SD = √( (1/n) Σ_{i=1}^{n} ( |True_i − Estimated_i| − ME )² ).
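A minimal sketch of this error computation, written against the equations above, is shown below: it converts a pair of true/estimated coordinates into a Haversine distance in meters (Equations (2) and (3)) and then aggregates a list of such distance errors into ME and SD (Equations (4) and (5)). It is an illustrative re-implementation, not the exact evaluation code used in the experiments.

```kotlin
import kotlin.math.asin
import kotlin.math.cos
import kotlin.math.sin
import kotlin.math.sqrt

const val EARTH_RADIUS_KM = 6371.0

// Haversine great-circle distance in meters between (latA, lonA) and (latB, lonB) in degrees.
fun haversineMeters(latA: Double, lonA: Double, latB: Double, lonB: Double): Double {
    val phiA = Math.toRadians(latA)
    val phiB = Math.toRadians(latB)
    val dPhi = Math.toRadians(latB - latA)
    val dLambda = Math.toRadians(lonB - lonA)
    val h = sin(dPhi / 2) * sin(dPhi / 2) +
            cos(phiA) * cos(phiB) * sin(dLambda / 2) * sin(dLambda / 2)
    val theta = 2 * asin(sqrt(h))             // central angle, Equation (2)
    return theta * EARTH_RADIUS_KM * 1000.0   // distance D, Equation (3), in meters
}

// Mean Error (Equation (4)) over a list of non-negative distance errors in meters.
fun meanError(errors: List<Double>): Double = errors.average()

// Standard Deviation (Equation (5)) of the same distance errors.
fun standardDeviation(errors: List<Double>): Double {
    val me = meanError(errors)
    return sqrt(errors.sumOf { (it - me) * (it - me) } / errors.size)
}
```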

4.4.2. Horizontal Surface Tracking Time Calculation

Tracking times are crucial for evaluating the system's efficiency in horizontal surface tracking tasks and assessing performance variations under different environmental conditions. Data on surface tracking time were collected across various lighting scenarios to understand system responsiveness and reliability. A specific application module was implemented to measure the duration in milliseconds required for the VSLAM/GSV integration to track and align surfaces horizontally. The process involves feature extraction from the camera frames to identify distinctive surface features for tracking. Once surface features are detected, the system tracks these features across consecutive frames to monitor their movement and alignment in real time. The tracking process begins as soon as the system identifies relevant surface features for horizontal alignment and continues until the system successfully aligns the AR object on the horizontal surface of the ground. The time taken from the initiation of tracking to the completion of surface alignment is recorded for each tracking instance.
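The sketch below illustrates one way such a timing module could be realized on top of ARCore's plane detection, assuming a per-frame loop that already calls Session.update(); the ARCore class and method names (Frame, Plane, TrackingState) are from the public SDK, but the wiring is simplified relative to the actual application module.

```kotlin
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.TrackingState

// Measures the elapsed time (ms) from the start of tracking until the first
// horizontal plane is tracked, i.e., until an AR object can be anchored on the ground.
class HorizontalTrackingTimer {
    private var startTimeMs: Long = 0L
    var trackingTimeMs: Long = -1L   // -1 until a horizontal surface has been aligned
        private set

    fun onTrackingStarted() {
        startTimeMs = System.currentTimeMillis()
        trackingTimeMs = -1L
    }

    // Call once per rendered frame, after Session.update() has produced the frame.
    fun onFrame(frame: Frame) {
        if (trackingTimeMs >= 0) return  // already measured for this instance
        val horizontalTracked = frame.getUpdatedTrackables(Plane::class.java).any {
            it.trackingState == TrackingState.TRACKING &&
                it.type == Plane.Type.HORIZONTAL_UPWARD_FACING
        }
        if (horizontalTracked) {
            trackingTimeMs = System.currentTimeMillis() - startTimeMs
        }
    }
}
```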

5. Experiment Results and Analysis

This section interprets the experimental results and discusses their implications for the effectiveness of VSLAM/GSV integration in outdoor LAR applications. The comprehensive evaluation conducted at five POI locations in Okayama University, Japan, under varying lighting conditions, yielded significant data for analysis. This section presents a comparative analysis of error distances and their interpretation regarding the illuminance levels that affect the effectiveness and accuracy of the VSLAM/GSV integration approach.

5.1. Comparative Analysis Result

The analysis focuses on evaluating the accuracy of the VSLAM/GSV integration by comparing designated anchor point coordinates with estimated coordinates under different lighting conditions. Discrepancies between actual and estimated positions at each POI location are examined to identify trends related to lighting conditions affecting system accuracy.
Horizontal tracking time data represents the efficiency of the integration approach in maintaining the horizontal alignment of visual data outdoors. By assessing the time taken to track and align features at each POI location, the analysis provides insights into system performance in real-world scenarios and its precision across varying light intensities. Experimental results of the distance error and surface tracking test are summarized in Table 2.
To verify our hypotheses, we split the sample data into four groups based on the natural light conditions: daylight, day overcast, rainy, and low light. First, we compared the first three groups, and then we added the low-light group to the comparison. Given this partition of the sample data, a one-way ANOVA test with a significance level of 0.05 (p = 0.05) was applied to determine whether the groups differ significantly. The results of the two clustering scenarios are summarized as box plots of the distance error for each group in Figure 7. The vertical axis corresponds to the mean of the distance error measurements. The natural light conditions are clearly labeled and organized in separate groups.
In Figure 7a, for the three-group comparison, the null hypothesis (H0) posited that there would be no significant difference in performance across the different light conditions, while the alternative hypothesis (H1) suggested that at least one of the conditions would lead to significantly different performance outcomes. The ANOVA results indicated that there was no statistically significant difference between daylight, overcast, and rainy conditions (F(2,72) = 2.2, p = 0.118). As the p-value (0.118) was greater than the significance level of 0.05, we fail to reject the null hypothesis and conclude that there is no significant difference in the accuracy of VSLAM/GSV integration across the three natural light conditions tested in this study.
In Figure 7b, interestingly, in the four-group comparison, including low light conditions, the ANOVA results revealed a highly significant effect of light conditions on performance (F(3,96) = 4720.52, p < 0.001). This indicates that adding the low light condition introduces a significant variance in performance, demonstrating that the low light condition significantly impacts the accuracy and reliability of the VSLAM/GSV approach. The significant F-statistic ratio and extremely low p-value in the four-group comparison highlight the substantial influence of low-light conditions on system performance. This suggests that future optimizations and calibrations of outdoor AR systems should particularly address the challenges posed by low-light environments to ensure consistent and reliable performance.

5.2. Minimum Illumination Threshold

The confidence interval estimation is utilized to assess the precision and reliability of experimental results related to ambient light intensity and system performance. By calculating the confidence interval, we can determine the range of values within which the true effects of varying light conditions on the accuracy metrics of the VSLAM/GSV approach are likely to fall [44]. This evaluation aims to identify the minimum lighting threshold for consistent accuracy in outdoor LAR implementation, guiding optimal system performance. To obtain the minimum value from the experiment result, the confidence interval is used to calculate the lower and upper illuminance in each natural lighting category. The confidence interval calculation involves sample mean, confidence level, population standard deviation, and sample size [45]. Equation (6) shows the confidence interval estimation formula.
CI = X̄ ± Z (σ / √n),
where:
X̄ = the sample mean;
Z = the Z-value corresponding to the desired confidence level (95% in this study, Z = 1.96);
σ = the population standard deviation;
n = the sample size.
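A minimal sketch of this computation, assuming a two-sided 95% interval (Z = 1.96) and a known population standard deviation as in Equation (6):

```kotlin
import kotlin.math.sqrt

// 95% confidence interval (lower, upper) for the mean, per Equation (6): CI = X̄ ± Z·σ/√n.
fun confidenceInterval95(sampleMean: Double, sigma: Double, n: Int): Pair<Double, Double> {
    val z = 1.96                        // Z value for a 95% confidence level
    val margin = z * sigma / sqrt(n.toDouble())
    return Pair(sampleMean - margin, sampleMean + margin)
}
```

Applying such a calculation to the illuminance samples of each lighting group yields lower and upper bounds analogous to those reported in Table 3.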
Table 3 displays the confidence interval estimation in the four group categories, presenting the illuminance levels (lx) under different natural light conditions. The table includes the sample size (n), the lower and upper confidence interval values, the mean and standard deviation of distance error, and the mean horizontal surface tracking time in seconds for each lighting condition (daylight, overcast, rainy, lowlight). These statistical values provide insights into the relationship between ambient light intensity and the accuracy metrics of the VSLAM/GSV integration approach under varying lighting scenarios.
The scatter plot with a fitted power curve in Figure 8 illustrates the relationship between illuminance (lx) and distance error (meters) in the VSLAM/GSV integration. Each blue dot represents an individual measurement of distance error at a specific illuminance level. The fitted equation y = 40.795 x^(−0.51), with an R² value of 0.4226, demonstrates an inverse relationship in which increased illuminance significantly reduces the distance error. The R² value indicates that 42.26% of the variability in distance error can be explained by illuminance levels.
Notably, the green dashed line at 434 lx represents the lower confidence interval estimation value for a 95% confidence level. This threshold signifies the minimum illuminance required to achieve reliable performance, as errors below this value (left of the line) are significantly higher, often exceeding 9 m. Beyond this threshold, distance errors decrease significantly and stabilize below 1 m at higher illuminance levels, illustrating the critical impact of adequate lighting on the accuracy and reliability of the AR system. This emphasizes the necessity of maintaining at least 434 lx in practical applications to ensure optimal functionality.
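In a deployed LAR application, this finding could translate into a simple runtime guard that checks the current ambient light reading before anchor placement. The sketch below is purely illustrative: only the 434 lx constant comes from this experiment, while the guard itself and how the application reacts to insufficient lighting are left open.

```kotlin
// Minimum illuminance (lx) found in this study for consistent VSLAM/GSV accuracy.
const val MIN_ILLUMINANCE_LX = 434f

// Illustrative guard: returns true if the ambient light reading is high enough
// for reliable anchor placement; callers may warn the user otherwise.
fun isLightingSufficient(currentLux: Float): Boolean = currentLux >= MIN_ILLUMINANCE_LX
```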

5.3. Discussion

The hypothesis of this study examines how varying lighting conditions impact the performance of VSLAM/GSV localizations in outdoor LAR systems. This research aims to determine how ambient light intensity affects the accuracy and reliability of VSLAM/GSV integration in outdoor AR applications. While our previous research has shown consistent error distances under daylight conditions, this study investigates the effects of different lighting conditions on error propagations, demonstrating the system’s adaptability and robustness. One key aspect of our hypothesis is the assertion that different lighting conditions, such as daylight, overcast, rainy, and low-light environments, play a significant role in shaping the performance metrics of VSLAM/GSV implementation. For example, in well-lit environments, the feature-matching and pose-estimation processes may exhibit higher accuracy and precision due to better visibility of visual cues. On the other hand, low-light conditions can pose challenges for the system in terms of tracking accuracy and object placement.
The proposed hypothesis has been validated through rigorous experiments at five Point of Interest (POI) locations under different lighting conditions. The analysis of descriptive statistics and confidence interval estimation highlighted the impact of light intensity on surface and feature matching performance, alignment precision, and pose estimation errors. The results indicated that the integration approach can achieve feasible and consistent accuracy levels at a minimum illuminance of 434 lx. This threshold value signifies the minimum lighting condition required for reliable outdoor LAR applications.
In [46], Daugaard et al. suggest that this minimum illumination level can conservatively be expected outdoors between 6:00 AM and 8:00 PM, which helps ensure that the system can achieve reliable performance and precise localization. By identifying the minimum illuminance level required for VSLAM/GSV to maintain consistent accuracy, this study provides valuable insights for optimizing visual approaches in outdoor AR scenarios. This information can guide the development of specialized algorithms and adaptive strategies to ensure high performance and accuracy in low-light conditions, where traditional visual tracking methods may face challenges. Light estimation mechanisms and technologies in AR implementations will be an important aspect of achieving lighting consistency in practical applications [47].
Furthermore, variations in visual references, such as wet surfaces during rainy conditions, were found to influence the horizontal tracking time and accuracy metrics (see Table 3). With the surface tracking time value affected, variations in surface properties, such as reflectivity, texture, and color, also need to be examined to understand how different surfaces interact with varying light conditions and influence the accuracy of the integration approach.
Comparing our results with existing literature, we note that understanding how varying light conditions influence surface and feature matching, alignment precision, and pose estimation in the VSLAM/GSV approach is essential for optimizing AR experiences. By analyzing the impact of ambient light on this vision-based approach, developers can create adaptive strategies to enhance system performance under different lighting scenarios. This research provides valuable insights into the relationship between ambient light conditions and LAR anchor point alignment, contributing to the advancement of AR technology.
While this study has provided valuable insights into the impact of ambient light intensity on the performance of the VSLAM/GSV integration in outdoor LAR applications, there are some limitations to consider. The research highlights the influence of environmental factors, such as wet surfaces during rainy seasons, on tracking time and accuracy metrics. However, this study primarily examines the impact of light intensity on accuracy metrics. Further explorations of the effects of other environmental factors on the integration approach’s performance could provide a more comprehensive understanding of the system’s behavior in real-world settings. Understanding how different surfaces interact with varying light conditions can help developers design more robust and reliable LAR systems that can adapt to real-world environmental challenges.

6. Conclusions

This study emphasizes the impact of ambient light intensity on VSLAM/GSV performance in outdoor LAR applications. By understanding the impact of ambient light intensity on a vision-based approach, developers can implement adaptive strategies to enhance the outdoor LAR system's performance under various lighting conditions. Through rigorous experiments at five POIs at Okayama University, Japan, with 100 datasets, descriptive statistics, ANOVA, and confidence interval estimation were employed to analyze the performance variation. A significant correlation (p < 0.05) between light intensity and accuracy was observed, with consistent accuracy achieved at a minimum of 434 lx. This research primarily examined the influence of light intensity on accuracy metrics with a limited dataset. Future researchers should expand data collection and consider additional environmental factors for a more comprehensive analysis. Future research directions include extending the implementation of VSLAM/GSV for outdoor LAR scenarios, user experience studies, optimization for low-light conditions, and the development of dynamic light adaptation algorithms to enhance system performance in diverse lighting environments. These efforts aim to enhance the precision, reliability, and user satisfaction of outdoor LAR applications, ensuring immersive AR experiences across different lighting conditions.

Author Contributions

Conceptualization, K.C.B. and N.F.; methodology, K.C.B. and N.F.; software, K.C.B. and Y.Y.F.P.; visualization, K.C.B. and P.A.R.; investigation, K.C.B. and M.M.; writing—original draft, K.C.B.; writing—review and editing, N.F.; supervision, N.F. All authors have read and agreed to the published version of this manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study, as human involvement was limited to obtaining real user location coordinates during the testing phase to validate the feasibility of the developed system.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank all the colleagues in the Distributing System Laboratory, Okayama University, who were involved in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Brata, K.C.; Liang, D. Comparative study of user experience on mobile pedestrian navigation between digital map interface and location-based augmented reality. Int. J. Electr. Comput. Eng. 2020, 10, 2037. [Google Scholar] [CrossRef]
  2. Asraf, S.M.H.; Hashim, A.F.M.; Idrus, S.Z.S. Mobile application outdoor navigation using location-based augmented reality (AR). J. Phys. Conf. Ser. 2020, 1529, 022098. [Google Scholar] [CrossRef]
  3. Sasaki, R.; Yamamoto, K. A sightseeing support system using augmented reality and pictograms within urban tourist areas in Japan. ISPRS Int. J. Geo-Inf. 2019, 8, 381. [Google Scholar] [CrossRef]
  4. Santos, C.; Araújo, T.; Morais, J.; Meiguins, B. Hybrid approach using sensors, GPS and vision based tracking to improve the registration in mobile augmented reality applications. Int. J. Multimed. Ubiquitous Eng. 2017, 12, 117–130. [Google Scholar] [CrossRef]
  5. Siegele, D.; Di Staso, U.; Piovano, M.; Marcher, C.; Matt, D.T. State of the art of non-vision-based localization technologies for AR in facility management. In Augmented Reality, Virtual Reality, and Computer Graphics: Proceedings of the 7th International Conference, AVR 2020, Lecce, Italy, 7–10 September 2020; Proceedings, Part I; Springer: Berlin/Heidelberg, Germany, 2020; pp. 255–272. [Google Scholar] [CrossRef]
  6. Uradziński, M.; Bakuła, M. Assessment of static positioning accuracy using low-cost smartphone GPS devices for geodetic survey points’ determination and monitoring. Appl. Sci. 2020, 10, 5308. [Google Scholar] [CrossRef]
  7. Brata, K.C.; Liang, D. An effective approach to developing location-based augmented reality information support. Int. J. Electr. Comput. Eng. 2019, 9, 3060. [Google Scholar] [CrossRef]
  8. Azuma, R.; Billinghurst, M.; Klinker, G. Special section on mobile augmented reality. Comput. Graph. 2011, 35, vii–viii. [Google Scholar] [CrossRef]
  9. Al-Zoube, M.A. Efficient vision-based multi-target augmented reality in the browser. Multimed. Tools Appl. 2022, 81, 14303–14320. [Google Scholar] [CrossRef]
  10. Sharafutdinov, D.; Griguletskii, M.; Kopanev, P.; Kurenkov, M.; Ferrer, G.; Burkov, A.; Gonnochenko, A.; Tsetserukou, D. Comparison of modern open-source visual SLAM approaches. J. Intell. Robot. Syst. 2023, 107, 43. [Google Scholar] [CrossRef]
  11. Zhou, X.; Sun, Z.; Xue, C.; Lin, Y.; Zhang, J. Mobile AR tourist attraction guide system design based on image recognition and user behavior. In Intelligent Human Systems Integration 2019: Proceedings of the 2nd International Conference on Intelligent Human Systems Integration (IHSI 2019): Integrating People and Intelligent Systems, San Diego, CA, USA, 7–10 February 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 858–863. [Google Scholar] [CrossRef]
  12. ARCore—Google Developers. Available online: https://developers.google.com/ar (accessed on 11 January 2023).
  13. Baker, L.; Ventura, J.; Langlotz, T.; Gul, S.; Mills, S.; Zollmann, S. Localization and tracking of stationary users for augmented reality. Vis. Comput. 2024, 40, 227–244. [Google Scholar] [CrossRef]
  14. He, J.; Li, M.; Wang, Y.; Wang, H. OVD-SLAM: An online visual SLAM for dynamic environments. IEEE Sens. J. 2023, 23, 13210–13219. [Google Scholar] [CrossRef]
  15. Liu, H.; Zhao, L.; Peng, Z.; Xie, W.; Jiang, M.; Zha, H.; Bao, H.; Zhang, G. A Low-cost and Scalable Framework to Build Large-Scale Localization Benchmark for Augmented Reality. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 2274–2288. [Google Scholar] [CrossRef]
  16. Reljić, V.; Milenković, I.; Dudić, S.; Šulc, J.; Bajči, B. Augmented reality applications in industry 4.0 environment. Appl. Sci. 2021, 11, 5592. [Google Scholar] [CrossRef]
  17. Kiss-Illés, D.; Barrado, C.; Salamí, E. GPS-SLAM: An augmentation of the ORB-SLAM algorithm. Sensors 2019, 19, 4973. [Google Scholar] [CrossRef]
  18. Tourani, A.; Bavle, H.; Sanchez-Lopez, J.L.; Voos, H. Visual SLAM: What are the current trends and what to expect? Sensors 2022, 22, 9297. [Google Scholar] [CrossRef]
  19. Fernández, L.; Payá, L.; Reinoso, O.; Jiménez, L.; Ballesta, M. A study of visual descriptors for outdoor navigation using google street view images. J. Sens. 2016, 2016. [Google Scholar] [CrossRef]
  20. Wang, J.; Wang, Q.; Saeed, U. A visual-GPS fusion based outdoor augmented reality method. In Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry, Tokyo, Japan, 2–3 December 2018; pp. 1–4. [Google Scholar] [CrossRef]
  21. Huang, K.; Wang, C.; Shi, W. Accurate and Robust Rotation-Invariant Estimation for High-Precision Outdoor AR Geo-Registration. Remote Sens. 2023, 15, 3709. [Google Scholar] [CrossRef]
  22. Jinyu, L.; Bangbang, Y.; Danpeng, C.; Nan, W.; Guofeng, Z.; Hujun, B. Survey and evaluation of monocular visual-inertial SLAM algorithms for augmented reality. Virtual Real. Intell. Hardw. 2019, 1, 386–410. [Google Scholar] [CrossRef]
  23. Xu, L.; Feng, C.; Kamat, V.R.; Menassa, C.C. An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments. Autom. Constr. 2019, 104, 230–245. [Google Scholar] [CrossRef]
  24. Sumikura, S.; Shibuya, M.; Sakurada, K. OpenVSLAM: A versatile visual SLAM framework. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 2292–2295. [Google Scholar] [CrossRef]
  25. Biljecki, F.; Ito, K. Street view imagery in urban analytics and GIS: A review. Landsc. Urban Plan. 2021, 215, 104217. [Google Scholar] [CrossRef]
  26. Qi, M.; Hankey, S. Using street view imagery to predict street-level particulate air pollution. Environ. Sci. Technol. 2021, 55, 2695–2704. [Google Scholar] [CrossRef]
  27. Graham, M.; Zook, M.; Boulton, A. Augmented reality in urban places: Contested content and the duplicity of code. In Machine Learning and the City: Applications in Architecture and Urban Design; John Wiley & Sons: Hoboken, NJ, USA, 2022; pp. 341–366. [Google Scholar] [CrossRef]
  28. Chalhoub, J.; Ayer, S.K. Exploring the performance of an augmented reality application for construction layout tasks. Multimed. Tools Appl. 2019, 78, 35075–35098. [Google Scholar] [CrossRef]
  29. Jeffri, N.F.S.; Rambli, D.R.A. A review of augmented reality systems and their effects on mental workload and task performance. Heliyon 2021, 7, e06277. [Google Scholar] [CrossRef]
  30. Merino, L.; Schwarzl, M.; Kraus, M.; Sedlmair, M.; Schmalstieg, D.; Weiskopf, D. Evaluating mixed and augmented reality: A systematic literature review (2009–2019). In Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Porto de Galinhas, Brazil, 9–13 November 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 438–451. [Google Scholar] [CrossRef]
  31. Papakostas, C.; Troussas, C.; Krouska, A.; Sgouropoulou, C. Measuring user experience, usability and interactivity of a personalized mobile augmented reality training system. Sensors 2021, 21, 3888. [Google Scholar] [CrossRef]
  32. Spittle, B.; Frutos-Pascual, M.; Creed, C.; Williams, I. A review of interaction techniques for immersive environments. IEEE Trans. Vis. Comput. Graph. 2022, 29, 3900–3921. [Google Scholar] [CrossRef]
  33. Brata, K.C.; Funabiki, N.; Sukaridhoto, S.; Fajrianti, E.D.; Mentari, M. An Investigation of Running Load Comparisons of ARCore on Native Android and Unity for Outdoor Navigation System Using Smartphone. In Proceedings of the 2023 Sixth International Conference on Vocational Education and Electrical Engineering (ICVEE), Surabaya, Indonesia, 14–15 October 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 133–138. [Google Scholar] [CrossRef]
  34. Brata, K.C.; Funabiki, N.; Panduman, Y.Y.F.; Fajrianti, E.D. An Enhancement of Outdoor Location-Based Augmented Reality Anchor Precision through VSLAM and Google Street View. Sensors 2024, 24, 1161. [Google Scholar] [CrossRef]
  35. Panduman, Y.Y.F.; Funabiki, N.; Puspitaningayu, P.; Kuribayashi, M.; Sukaridhoto, S.; Kao, W.C. Design and implementation of SEMAR IOT server platform with applications. Sensors 2022, 22, 6436. [Google Scholar] [CrossRef]
  36. Bhandary, S.K.; Dhakal, R.; Sanghavi, V.; Verkicharla, P.K. Ambient light level varies with different locations and environmental conditions: Potential to impact myopia. PLoS ONE 2021, 16, e0254027. [Google Scholar] [CrossRef]
  37. Do, T.H.; Yoo, M. Performance analysis of visible light communication using CMOS sensors. Sensors 2016, 16, 309. [Google Scholar] [CrossRef] [PubMed]
  38. Preto, S.; Gomes, C.C. Lighting in the workplace: Recommended illuminance (LUX) at workplace environs. In Advances in Design for Inclusion: Proceedings of the AHFE 2018 International Conference on Design for Inclusion, Loews Sapphire Falls Resort at Universal Studios, Orlando, FL, USA, 21–25 July 2018; Springer: Cham, Switzerland, 2019; pp. 180–191. [Google Scholar] [CrossRef]
  39. Kwak, S. Are only p-values less than 0.05 significant? A p-value greater than 0.05 is also significant! J. Lipid Atheroscler. 2023, 12, 89. [Google Scholar] [CrossRef] [PubMed]
  40. Michael, P.R.; Johnston, D.E.; Moreno, W. A conversion guide: Solar irradiance and lux illuminance. J. Meas. Eng. 2020, 8, 153–166. [Google Scholar] [CrossRef]
  41. Zhao, L.; Dong, B.; Li, W.; Zhang, H.; Zheng, Y.; Tang, C.; Hu, B.; Yuan, S. Smartphone-based quantitative fluorescence detection of flowing droplets using embedded ambient light sensor. IEEE Sens. J. 2020, 21, 4451–4461. [Google Scholar] [CrossRef]
  42. Andreou, A.; Mavromoustakis, C.X.; Batalla, J.M.; Markakis, E.K.; Mastorakis, G.; Mumtaz, S. UAV Trajectory Optimisation in Smart Cities using Modified A* Algorithm Combined with Haversine and Vincenty Formulas. IEEE Trans. Veh. Technol. 2023, 72, 9757–9769. [Google Scholar] [CrossRef]
  43. Cao, H.; Wang, Y.; Bi, J.; Xu, S.; Si, M.; Qi, H. Indoor positioning method using WiFi RTT based on LOS identification and range calibration. ISPRS Int. J. Geo-Inf. 2020, 9, 627. [Google Scholar] [CrossRef]
  44. Juárez-Tárraga, F.; Perles-Ribes, J.F.; Ramón-Rodríguez, A.B.; Cárdenas, E. Confidence intervals as a tool to determine the thresholds of the life cycle of destinations. Curr. Issues Tour. 2023, 26, 3923–3928. [Google Scholar] [CrossRef]
  45. Turner, D.P.; Deng, H.; Houle, T.T. Understanding and Applying Confidence Intervals. Headache J. Head Face Pain 2020, 60, 2118–2124. [Google Scholar] [CrossRef] [PubMed]
  46. Daugaard, S.; Markvart, J.; Bonde, J.P.; Christoffersen, J.; Garde, A.H.; Hansen, Å.M.; Schlünssen, V.; Vestergaard, J.M.; Vistisen, H.T.; Kolstad, H.A. Light exposure during days with night, outdoor, and indoor work. Ann. Work. Expo. Health 2019, 63, 651–665. [Google Scholar] [CrossRef]
  47. Zhu, K.; Liu, S.; Sun, W.; Yuan, Y.; Wu, Y. A Lighting Consistency Technique for Outdoor Augmented Reality Systems Based on Multi-Source Geo-Information. ISPRS Int. J. Geo-Inf. 2023, 12, 324. [Google Scholar] [CrossRef]
Figure 1. LAR system architecture in this study.
Figure 2. Simplified process flow of LAR system used in this study.
Figure 3. Interface samples of the proposed system. (a) Map interface. (b) AR interface.
Figure 4. Experimental design of comparison test.
Figure 5. Data collection flow of comparison test.
Figure 6. Screenshots of LAR prototype interfaces in various natural lighting conditions. (a) Daylight. (b) Overcast. (c) Rain. (d) Low Light.
Figure 7. One-way ANOVA results shown as boxplots for each group of natural lighting conditions in the experiment. (a) Three groups: daylight, overcast, and rainy. (b) All four groups, including low light.
Figure 8. Correlation between illuminance and distance error for all POIs in the experiment, with the lower confidence bound.
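For readers who want to retrace the correlation analysis visualized in Figure 8, the Python sketch below is a minimal illustration only, not the authors' analysis code: it uses the per-POI mean values from Table 2 as stand-in data points, so the resulting coefficient is indicative rather than the paper's reported value.

```python
# A minimal sketch (not the authors' analysis code) of the illuminance vs.
# distance-error correlation visualized in Figure 8. The per-POI means from
# Table 2 are used as stand-in data points for illustration only.
from scipy import stats

illuminance_lx = [5114.20, 4573.80, 4078.00, 6455.40, 4800.60,   # daylight
                  669.20, 733.40, 735.60, 1026.40, 798.40,       # overcast
                  541.80, 457.00, 455.00, 452.40, 504.20,        # rainy
                  177.60, 146.00, 80.60, 152.00, 129.60]         # low light
distance_error_m = [0.71, 0.84, 0.92, 0.82, 0.81,
                    0.79, 0.84, 0.86, 0.81, 0.81,
                    0.85, 0.88, 0.92, 0.87, 0.78,
                    9.14, 8.96, 9.82, 9.37, 9.75]

r, p = stats.pearsonr(illuminance_lx, distance_error_m)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```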
Table 1. Specification of Android device.
Specification | Details
Model | Sony Xperia XZ1 G8342
Android Version | Android 9.0 (Pie)
Resolution | 1080 × 1920 pixels, 16:9 ratio (424 ppi density)
Processor | Octa-core (4 × 2.45 GHz Kryo, 4 × 1.9 GHz Kryo)
Chipset | Qualcomm MSM8998 Snapdragon 835 (10 nm)
Storage | 64 GB
Battery | Li-Ion 2700 mAh
Available Sensors | GPS, Accelerometer, Gyroscope, Compass, Barometer, Proximity, and Ambient Light
Table 2. Experiment result of each POI location in different light conditions.
Natural Light Conditions | Locations | n | Illuminance Level (lx) Mean/SD | Distance Error (m) Mean/SD | Horizontal Surface Tracking (s) Mean/SD
Daylight | POI 1 | 5 | 5114.20/1453.15 | 0.71/0.06 | 0.73/0.05
Daylight | POI 2 | 5 | 4573.80/1253.01 | 0.84/0.05 | 0.71/0.09
Daylight | POI 3 | 5 | 4078.00/1732.91 | 0.92/0.06 | 0.57/0.06
Daylight | POI 4 | 5 | 6455.40/1558.50 | 0.82/0.11 | 0.56/0.07
Daylight | POI 5 | 5 | 4800.60/1415.47 | 0.81/0.02 | 0.64/0.08
Day Overcast | POI 1 | 5 | 669.20/140.23 | 0.79/0.04 | 0.63/0.09
Day Overcast | POI 2 | 5 | 733.40/261.71 | 0.84/0.05 | 0.67/0.09
Day Overcast | POI 3 | 5 | 735.60/323.05 | 0.86/0.08 | 0.62/0.07
Day Overcast | POI 4 | 5 | 1026.40/664.74 | 0.81/0.10 | 0.64/0.08
Day Overcast | POI 5 | 5 | 798.40/245.15 | 0.81/0.02 | 0.61/0.06
Day Rainy | POI 1 | 5 | 541.80/121.30 | 0.85/0.03 | 1.38/0.38
Day Rainy | POI 2 | 5 | 457.00/103.11 | 0.88/0.05 | 1.29/0.40
Day Rainy | POI 3 | 5 | 455.00/148.00 | 0.92/0.05 | 1.49/0.61
Day Rainy | POI 4 | 5 | 452.40/144.69 | 0.87/0.05 | 1.32/0.14
Day Rainy | POI 5 | 5 | 504.20/112.87 | 0.78/0.01 | 1.56/0.17
Low Light | POI 1 | 5 | 177.60/94.75 | 9.14/0.68 | 6.78/4.60
Low Light | POI 2 | 5 | 146.00/69.38 | 8.96/0.42 | 6.95/4.33
Low Light | POI 3 | 5 | 80.60/67.74 | 9.82/0.52 | 9.86/4.97
Low Light | POI 4 | 5 | 152.00/83.02 | 9.37/0.74 | 7.55/7.00
Low Light | POI 5 | 5 | 129.60/58.98 | 9.75/0.29 | 7.24/5.83
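As a minimal illustration of how the one-way ANOVA summarized in Figure 7 could be run on such measurements, the sketch below compares distance errors across lighting groups with SciPy. It is an assumption, not the authors' implementation: the per-POI mean errors from Table 2 stand in for the raw 25-sample groups, so the F and p values will differ from those reported in the paper.

```python
# A minimal sketch (not the authors' implementation) of the one-way ANOVA
# summarized in Figure 7, comparing distance errors across lighting groups.
# The per-POI mean errors from Table 2 stand in for the raw 25-sample
# groups, so F and p values will differ from those reported in the paper.
from scipy import stats

daylight = [0.71, 0.84, 0.92, 0.82, 0.81]
overcast = [0.79, 0.84, 0.86, 0.81, 0.81]
rainy    = [0.85, 0.88, 0.92, 0.87, 0.78]
lowlight = [9.14, 8.96, 9.82, 9.37, 9.75]

# (a) Three daytime groups only.
f3, p3 = stats.f_oneway(daylight, overcast, rainy)
# (b) All four groups, including low light.
f4, p4 = stats.f_oneway(daylight, overcast, rainy, lowlight)

print(f"3 groups: F = {f3:.2f}, p = {p3:.4f}")
print(f"4 groups: F = {f4:.2f}, p = {p4:.4f}")
```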
Table 3. Summarized confidence interval estimation in four group categories.
Natural Light Conditions | n | Illuminance (lx) Lower–Upper | Illuminance (lx) Confidence Interval | Illuminance (lx) Mean/SD | Distance Error Mean (m) | Horizontal Surface Tracking Mean (s)
Daylight | 25 | 4382–5627 | 622.28 | 5004.40/1587.47 | 0.82 | 0.64
Day Overcast | 25 | 650–935 | 142.27 | 792.60/362.94 | 0.82 | 0.63
Day Rainy | 25 | 434–530 | 47.69 | 482.08/121.65 | 0.86 | 1.41
Low Light | 25 | 107–167 | 30.04 | 137.16/76.63 | 9.41 | 7.68
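The confidence bounds in Table 3 can be obtained with a standard t-based 95% interval over the per-condition illuminance readings. The sketch below assumes such an interval (the paper's exact procedure may differ) and uses the Day Rainy per-POI means from Table 2 as stand-in data, so the resulting bounds will not match Table 3 exactly.

```python
# A minimal sketch (an assumption, not the authors' procedure) of a
# t-based 95% confidence interval for per-condition illuminance, in the
# spirit of Table 3. The Day Rainy per-POI means from Table 2 stand in
# for the raw 25 lux readings, so the numbers will not match Table 3 exactly.
import numpy as np
from scipy import stats

rainy_lux = np.array([541.80, 457.00, 455.00, 452.40, 504.20])  # per-POI means (lx)

mean = rainy_lux.mean()
sem = stats.sem(rainy_lux)                            # standard error of the mean
half_width = stats.t.ppf(0.975, len(rainy_lux) - 1) * sem

print(f"Day Rainy illuminance: {mean:.2f} lx, "
      f"95% CI [{mean - half_width:.2f}, {mean + half_width:.2f}] lx")
```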
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
