Article

Analysis and Improvements in AprilTag Based State Estimation

1 Department of Electrical Engineering, Lahore University of Management Sciences (LUMS), Lahore 54792, Pakistan
2 Department of Computer Science, University of Kaiserslautern, D-67663 Kaiserslautern, Germany
* Author to whom correspondence should be addressed.
Sensors 2019, 19(24), 5480; https://doi.org/10.3390/s19245480
Submission received: 31 October 2019 / Revised: 27 November 2019 / Accepted: 28 November 2019 / Published: 12 December 2019
(This article belongs to the Special Issue Intelligent Systems and Sensors for Robotics)

Abstract

In this paper, we analyze in detail the accuracy and precision of AprilTag as a visual fiducial marker. We analyze the error propagation along the two horizontal axes along with the effect of angular rotation about the vertical axis. We have identified that the angular rotation of the camera (yaw angle) about its vertical axis is the primary source of error and decreases the precision to the point where the marker system is not viable for sub-decimeter precision tasks. Other factors are the distance and viewing angle of the camera from the AprilTag. Based on these observations, three improvement steps are proposed. The first is a trigonometric correction of the yaw angle to point the camera towards the center of the tag. The second is the use of a custom-built yaw-axis gimbal, which tracks the center of the tag in real time. Third, we present for the first time a pose-indexed probabilistic sensor error model of the AprilTag using Gaussian Process based regression of experimental data, validated by particle filter tracking. Our proposed approach, which can be deployed with all three improvement steps, increases the system’s overall accuracy and precision manifold, with a slight trade-off in execution time compared to the commonly available AprilTag library. These proposed improvements make AprilTag suitable for use as a precision localization system for outdoor and indoor applications.

1. Introduction

Localization capability is the backbone of many robotic systems, as it helps determine the state of the robot at a given time instance [1]. Many essential subsystems of an autonomous mobile system take localization as an input for developing maps or planning navigation strategies [2]. The nature of the application determines the level of localization accuracy required. Localization accuracy is commonly measured by comparing it with the ground truth at any given time instance. Therefore, the ground truth itself must be of superior accuracy to minimize the error in measuring localization accuracy. In robotics, there exist several ways of generating ground truth. A standard method for generating ground truth for localization is with the help of high precision motion capture cameras (MoCap) [3]. This system is considered to be the benchmark for many indoor localization systems worldwide. A MoCap setup contains multiple cameras calibrated at known positions, which fuse their data to track a known marker with high accuracy. For indoor applications, another commonly used method is the use of fiducial or visual marker-based localization systems. This method is quite popular because it is ready to use out of the box. For many applications, fiducial markers’ relative ease of use makes them the primary method for localization [4]. For outdoor applications, GPS is the most popular source for ground truth verification [5]. Although it is globally consistent, nominal GPS systems do not provide adequate accuracy for tasks that demand sub-meter localization accuracy, such as robot navigation, obstacle avoidance or structural inspection in confined environments. Some high-end GPS methodologies such as D-GPS and RTK-GPS have an accuracy of 0.1 m or less, but they are quite expensive and hard to set up. In outdoor environments, the deployment of fiducial marker-based systems is also possible, but they have limitations on operating distance and field of view.
AprilTag is one of the most commonly used fiducial markers and can be used both indoors and outdoors for ground truth generation in 6-DOF, but with limitations [6]. We have precisely identified these limitations and have explained their sources with statistical error models. The proposed research establishes that both the distance and the orientation of the viewing camera relative to the target tag affect accuracy. However, uncorrected orientation uncertainty is the more significant source of accuracy degradation. AprilTag’s accuracy is maximal when the viewing camera is pointed towards the center of the tag. Moreover, in the current implementation of the AprilTag localization system, this source of error is left unaddressed. As a result, the system suffers from a loss of performance, which is rectifiable. The proposed research fills this gap (only for 2D) via an empirical analysis of the AprilTag system. Furthermore, a data-driven probabilistic sensor model has also been proposed, which works both in indoor and outdoor environments.
In this paper, we propose techniques to overcome this limitation and to increase the accuracy even for wider horizontal viewing angles. The proposed technique consists of three approaches. The first is a geometric soft correction of the displacement angle from the center of the tag. The second is an active correction of the angular displacement using a custom-built gimbal, which detects the tag in real time and physically keeps the camera viewing angle pointed towards the center of the AprilTag horizontally. The third is a probabilistic sensor error model of the AprilTag obtained by Gaussian Process (GP) based regression of experimental data. The forward sensor model is directly usable in a standard Bayes filter for localization, mapping, SLAM or exploration algorithms [2]. We have used these approaches in combination and a detailed comparison is also presented to determine how the different approaches improve the overall precision. For example, in an ideal scenario, we have improved the accuracy from 4.4 cm to 0.8 cm in the x-axis and from 2.56 cm to 0.54 cm in the y-axis. Moreover, in addition to the accuracy, the precision has also been improved from 112 cm² to 0.29 cm² for the x-axis and from 14 cm² to 0.60 cm² for the y-axis over a target distance of 70 cm. All the AprilTag measurements used in the data error comparisons are raw and without modifications.
In Section 2, an overview of the related work on visual marker systems is given. This section describes different fiducial markers, their techniques and their applications. Section 3 illustrates the problem setup and the evaluation of AprilTag as a localization system. In this section, the implementation methodology of AprilTag is briefly discussed, then the transformations required for trajectory generation are described. The error measurement setup is then explained, along with the method for taking measurements and, finally, the identification of the AprilTag’s shortcomings. Section 4 discusses the reasons behind the identified shortcomings and proposes improvement techniques. A detailed comparison of all the proposed improvement approaches is then presented. Afterward, a probabilistic sensor model for AprilTag is proposed by using Gaussian Process (GP) regression, along with experimental verification of the proposed sensor model by implementing trajectory tracking using a particle filter both in a laboratory setup and in an outdoor environment. Lastly, Section 5 concludes the paper and identifies the future work required.

2. Related Work

A visual fiducial system uses 2D coded information embedded on a tag to give the position and orientation of the marker relative to the camera. The 2D coded information also distinguishes one marker from another. Several distinct fiducial systems are used in robotics applications for pose estimation. They are best known for their use in augmented reality applications to support vision-based tracking [7]. Table 1 shows an overview of commonly used fiducial markers in robotics applications along with their key features.
In 2011, Olson [6] showed that AprilTag surpasses its predecessors in terms of detection rate, inter-coding Hamming distance, scale and angular accuracy. Olson also addressed the accuracy issues of AprilTag related to tag-detection percentage and camera distance from the tag. However, those results are not extensive enough to create a complete sensor model for AprilTag and miss some necessary details, which we address in the following sections. Nonetheless, because of the superior robustness and accuracy of AprilTag [6], many researchers have preferred it over other visual fiducial markers. In 2017, Sagitov et al. [16] compared ARTag, AprilTag and CALTag under occlusions and showed that AprilTag is robust against small occlusions.
One of the advantages of AprilTag is its utility as a low-cost localization solution in augmented reality and robotics applications. The setup requires only a monocular camera and an AprilTag printed on paper. As a result, researchers prefer fiducial markers over other high-end localization systems. Feng et al. [17] have used AprilTags as spatial indices for operations like navigation and inspection inside a building for engineering, construction and management related tasks. They have placed AprilTags on different parts of the building, which direct users with operation-specific information when seen through a mobile camera. Li et al. [18] combine the fiducial marker with inertial sensors to improve position and pose tracking of a hand-held augmented reality system. They achieved an accuracy of 1.77 cm for position and 4.15° for orientation estimation. Some researchers have used AprilTag as a landmark and tracked it in robotics applications. Wang et al. [19] and Wang [20] have proposed a vision-based vehicle tracking system in which an unmanned aerial vehicle (UAV) tracks a ground vehicle by using an AprilTag attached to the ground vehicle. Ling et al. [21] have used an AprilTag attached to a water vessel to autonomously land a UAV on it. Similarly, Zhang et al. [22] have used an identical approach to land an aerial vehicle on the ground using AprilTag. Later, similar work regarding the autonomous landing of a quadrotor using AprilTag was done by Reference [23]. Tang et al. [24] have proposed an algorithm to fuse the data from multiple cameras and a 2D laser scanner. They have used an array of AprilTags as a calibration target and employ a non-linear optimization technique to estimate the intrinsic parameters of each of the multiple cameras and later fuse them with 2D laser scanner data to obtain an improved position and pose estimate.
Another advantage of AprilTag is the accurate evaluation of individual localization systems or algorithms. Ramirez [25] has made a dataset for visual odometry and localization in which AprilTags are used as landmarks for accuracy evaluation. Parkison et al. [26] have used AprilTag to evaluate the position and pose of a micro aerial vehicle (MAV) for automated indoor RFID inventorying. Raina et al. [27] have used multiple AprilTags as a ground truth evaluation system for 3D pose estimation in a cluttered environment. Maragh [28] has used AprilTag to control the position and angular velocity of a rotating body using PD control. She has also demonstrated the upper limit of the angular velocity of a moving object for robust detection of AprilTag. Similarly, Zake et al. [29] have used AprilTag measurements as ground-truth values to compare against the output of their proposed pose-based visual servoing technique for cable-driven robots. Florea et al. [30] have used AprilTags to localize a drone and multiple waypoints in a proposed sensor fusion technique for localization using numerical P systems. Researchers have also used AprilTags for modeling the dynamics of different physical systems. Britto et al. [31] have used the AprilTag fiducial marker to estimate the position and orientation of an unmanned underwater vehicle (UUV), later used in the dynamic model of the underwater system. Fuchs et al. [32] have used AprilTag for the kinematic modeling and trajectory generation of a trailer attached to a truck using a Kalman Filter [33]. Nissler et al. have used AprilTag for robot-to-camera calibration to get the exact pose of each robot part in the camera frame of reference for precise operations. Mueggler et al. [34] have used multiple AprilTags to precisely estimate the position of an aerial vehicle in a swarm rescue operation. They successfully demonstrated this in a laboratory setup, where localization from AprilTags played an integral part in completing the task. Xie et al. [35] have used AprilTag to find the poses and extrinsics of a multi-camera and multi-LiDAR system. They have shown that using AprilTags in the calibration process improves the overall robustness and accuracy of an autonomous driving platform. Later, Nissler et al. [36] used single and multiple AprilTags with a high-end camera to estimate the position and orientation of a manipulator in an industrial environment. They have shown that the use of AprilTag can help increase the precision of manipulator tasks. Similarly, De et al. [37] have used AprilTag for pose estimation in visual-inertial navigation of a real-time MAV application in an indoor environment.
The disadvantage of AprilTag as a localization system is that multiple factors may result in erroneous localization. These factors include the viewing angle, distance and camera rotation around its own axis. Though AprilTag has been used in many applications ranging from virtual reality to tracking and localization, there are not many studies that systematically analyze how the inaccuracy propagates over different distances and viewing angles. A study regarding the accuracy evaluation of a similar fiducial marker (ARToolKit) has been conducted by Abawi et al. [38]. They have experimentally calculated the accuracy of ARToolKit, which is a fiducial marker similar to AprilTag but far less robust and accurate, as demonstrated by Reference [6]. They conclude that ARToolKit is accurate for short distances and for viewing angles between 40° and 80°. Furthermore, Wang et al. [39] have also proposed improvements in AprilTag, but those improvements are limited to improving tag detection and lowering computational utilization; the result is called AprilTag 2. In 2017, Jin et al. [40] showed that the AprilTag pose output is inaccurate and noisy. They proposed that adding depth information along with the RGB information of the tag improves the overall pose accuracy. They have used an RGB-D camera to detect an AprilTag in an indoor setup. However, the proposed method fails outdoors, as the RGB-D camera does not work in direct sunlight. Zhenglong et al. [41] have used multiple AprilTags to better estimate the pose of a flying quadrotor. They have experimented in an indoor environment by laying multiple AprilTags on the floor and flying a multirotor with a down-looking camera. A Kalman Filter with a constant velocity model has been used to estimate a more accurate pose by fusing poses from multiple AprilTags. It is shown that this improves the overall pose estimation and matches it with a Motion Capture (MoCap) system. However, for a large outdoor environment, it is not possible to lay multiple AprilTags on the ground beneath a flying robot all the time for pose correction, which makes the proposed approach unsuitable for large outdoor environments. In 2019, Kayhani et al. [42] showed that the raw AprilTag pose is not accurate enough for autonomous operations and improved the accuracy of an indoor multi-copter by fusing pose data from multiple AprilTags with the help of an Extended Kalman Filter.
Some researchers have evaluated the pose accuracy of AprilTag in indoor environments and have improved it by using multiple tags along with data fusion techniques such as Kalman Filtering. Though they have improved the detected pose accuracy by fusing data from multiple tags, their proposed setups are only feasible in small indoor environments; for large outdoor environments, the problem remains open. Moreover, AprilTags can be used for ground-truth analysis in many autonomous applications such as self-driving cars [43]. Hence, improving AprilTag accuracy to the point where it serves as a ground-truth solution, especially outdoors, is still an open challenge.

3. Problem Setup and System Evaluation

3.1. AprilTag Working Principle

As described in the earlier section, AprilTag also uses an embedded 2D-coded marker for tag detection and to differentiate it from other tags. The visual marker can be printed at any size but must be square. The tag is printed on a white background with a black outline square. Inside the square is an embedded black 2D barcode. AprilTag [6] uses a unique detection algorithm for fast, robust detection and to minimize the effect of small occlusions. Figure 1 shows the algorithmic steps of AprilTag. In the first step, it computes the magnitude and direction of the gradient at every pixel in an image that contains the AprilTag. Afterward, the calculated gradients are grouped into clusters called components based on similar gradient attributes using a graph-based method. Using a weighted least squares technique, a line is fitted to every component such that the direction of the gradients determines the direction of the fitted line; hence each line has a dark side on its left and a lighter side on its right. After identifying all lines, possible quad shapes are detected, as shown in step 3 of Figure 1. The quad shape with a valid coding scheme is extracted. Finally, a 6-DOF pose of the tag in the camera frame of reference is returned by using homography and intrinsic estimation over the extracted tag.
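To make the last step concrete, the following sketch shows how a 6-DOF tag pose can be recovered from the four detected quad corners and the camera intrinsics. It is not the AprilTag library’s own homography routine; it uses OpenCV’s generic PnP solver as an equivalent illustration, and the tag size, intrinsics and corner coordinates are illustrative placeholders.

import numpy as np
import cv2

tag_size = 0.16  # assumed tag edge length in metres

# 3D corners of the tag in its own frame (z = 0 plane), ordered to match the detection.
object_pts = np.array([
    [-tag_size / 2, -tag_size / 2, 0.0],
    [ tag_size / 2, -tag_size / 2, 0.0],
    [ tag_size / 2,  tag_size / 2, 0.0],
    [-tag_size / 2,  tag_size / 2, 0.0],
], dtype=np.float64)

# 2D corners of the detected quad in the image (placeholder pixel values).
image_pts = np.array([[310.0, 250.0], [420.0, 255.0], [415.0, 365.0], [305.0, 360.0]],
                     dtype=np.float64)

# Assumed pinhole intrinsics (fx, fy, cx, cy) with no lens distortion.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation of the tag in the camera frame
# (R, tvec) is the 6-DOF pose of the tag in the camera frame of reference.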

3.2. Trajectory Generation

In all robotics applications, odometry is key to every operation. Odometry comprises all the positions and poses of a moving robot along with their timestamps. As discussed in the literature survey, AprilTag is widely used for odometry generation of robots in both indoor and outdoor applications. As illustrated in the previous section, AprilTag returns a single pose in 6-DOF relative to the camera frame of reference. Moreover, as the camera mounted on a robot changes its position along with the motion of the robot, it produces a series of poses from AprilTag at each time instance. Each pose shows the position of the robot along the moving robot’s trajectory at a particular time instance. Furthermore, to obtain a continuous trajectory, we apply a standard transformation between any two consecutive poses, as shown in Figure 2. Suppose we get a 6-DOF pose from an AprilTag in the camera frame of reference; then the camera-attached frame, described in the tag frame of reference τ = (x̄, ȳ, z̄) at instance i, is given by the homogeneous transformation:
$$ T_i^{\tau} = \begin{bmatrix} R_i & d_i \\ 0 & 1 \end{bmatrix}_{4 \times 4}, \qquad d_i = \begin{bmatrix} p_{ix} & p_{iy} & p_{iz} \end{bmatrix}^{T}, \qquad R_i = \begin{bmatrix} c_{\phi} c_{\theta} & c_{\theta} s_{\psi} s_{\phi} - c_{\psi} s_{\theta} & c_{\psi} c_{\theta} s_{\phi} + s_{\psi} s_{\theta} \\ c_{\phi} s_{\theta} & c_{\psi} c_{\theta} + s_{\psi} s_{\phi} s_{\theta} & -c_{\theta} s_{\psi} + c_{\psi} s_{\phi} s_{\theta} \\ -s_{\phi} & c_{\phi} s_{\psi} & c_{\psi} c_{\phi} \end{bmatrix}. \tag{1} $$
In Equation (1), the input angles are in the camera frame of reference: θ is the rotation about the z_i-axis and represents the roll motion of the camera, ϕ is the rotation about the y_i-axis and represents the yaw motion of the camera and ψ is the rotation about the x_i-axis and represents the pitch motion of the camera, where i = 0, 1, 2, ..., n; c and s denote cosine and sine. p_ix, p_iy, p_iz are the displacements along the x-axis, y-axis and z-axis respectively in the camera frame of reference. To have a trajectory in a single frame of reference, we need to find the transformation T_i^(i+1) from point p_i to p_(i+1):
$$ T_i^{i+1} = T_{\tau}^{i+1} \times \left( T_{\tau}^{i} \right)^{-1}, \qquad \text{where} \quad T^{-1} = \begin{bmatrix} R^{T} & -R^{T} d \\ 0 & 1 \end{bmatrix}. \tag{2} $$
In practice, AprilTag is detected at 10 Hz and the trajectory becomes almost continuous due to the slow camera movement.
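As an illustration of Equations (1) and (2), the following minimal sketch chains consecutive AprilTag detections into a single trajectory. It assumes each detection has already been converted into a 4 × 4 homogeneous transform T_i^τ (camera frame i described in the tag frame); the function names and the choice to express the result in the first camera frame are our own illustrative assumptions, not part of the AprilTag library.

import numpy as np

def invert_homogeneous(T):
    # Closed-form inverse used in Equation (2): [[R^T, -R^T d], [0, 1]].
    R, d = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T @ d
    return T_inv

def relative_transform(T_prev, T_curr):
    # Equation (2): the transform between two consecutive camera frames,
    # with both camera poses expressed in the tag frame.
    return invert_homogeneous(T_curr) @ T_prev

def chain_trajectory(tag_poses):
    # Express every camera pose in the frame of the first camera by accumulating
    # the inverse of each Equation (2) increment.
    poses = [np.eye(4)]
    for T_prev, T_curr in zip(tag_poses[:-1], tag_poses[1:]):
        poses.append(poses[-1] @ invert_homogeneous(relative_transform(T_prev, T_curr)))
    return poses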

3.3. Error Measurements Setup

To analyze the accuracy and precision of AprilTag, raw readings from AprilTag’s native implementation have been compared with the readings of a high precision localization system, the Vicon MX F40, also known as Motion Capture (MoCap) [3]. MoCap consists of 16 high frame-rate cameras placed at different known positions in an indoor environment. The system optically tracks a passive marker in 6-DOF with sub-centimeter precision. It also has the capability of tracking multiple passive markers. This system is considered a benchmark for indoor localization problems. Multiple cameras, calibrated at known positions, fuse the optical tracking data of a marker to track it with good accuracy, as shown in Figure 3.
The analysis of AprilTag requires centimeter-level accuracy, which is why we used a Motion Capture system (MoCap) for ground truth measurements. As described earlier, MoCap is an optical system that detects a passive marker; hence, a passive marker has been mounted above the camera to detect the position and orientation of the camera. For AprilTag localization, the measurement process has been simplified by aligning the origins of the AprilTag and MoCap frames of reference. Also, the camera is placed on a robotic platform that moves randomly around, and the mounted camera has a constant motion of at most 30° around its yaw axis to include the maximum possible noise at a given nominal reference point. Table 2 shows the overall performance of the MoCap for estimating the robot’s position on the ground. Column 1 (x_r) and column 2 (y_r) in Table 2 show the nominal positions of the reference points from which the measurements have been taken. Column 3 (N) represents the total number of readings taken at a specific reference point; column 4 (μ_x̄) and column 5 (μ_ȳ) show the accuracy of the MoCap as the reported mean value in the x-axis and y-axis respectively. Lastly, column 6 (σ²_x̄) and column 7 (σ²_ȳ) show the precision of the MoCap in the form of the variances reported in the x-axis and y-axis respectively.
Moreover, Figure 4 shows the error plot for both the x̄-axis and ȳ-axis of MoCap. In Figure 4, the horizontal axis shows the x̄ and ȳ components of the MoCap measurements in the left and right plots respectively and the vertical axis shows the accuracy after subtracting the mean value from the measurements. The measurements are taken at different distances from the AprilTag along the ȳ-axis; this information is coded in three colors: red represents a distance of 30 cm, blue represents a distance of 50 cm and green represents a distance of 70 cm. The plots in Figure 4 show that the x̄ component of the MoCap measurements is more affected by viewing distance than the ȳ component. Further, as we move towards either the left or right side from the center of the tag, accuracy decreases.
For a theoretical point of reference in the experiments, we have used error measurement markings to get a rough estimate of the position of the camera relative to the AprilTag, as shown in Figure 5. The design of the measurement experiments is illustrated in Figure 6, which shows how the readings have been taken to observe the actual inaccuracy caused by the various parameters such as distance and camera viewing angle. For the rest of the paper, the analysis measurements are in a plane only, namely the x-axis x, z-axis z and yaw angle ϕ in the camera frame of reference. The output of the measurements is represented in a 3-DOF AprilTag frame of reference τ with variables x̄, ȳ and θ̄. In Figure 6, crosses represent the locations of the robot from which the readings are noted. This measurement technique is common to both the MoCap and the AprilTag data recordings used for evaluation. These positions are obtained from the nominal reference points marked on the error measurement setup. The error measurement setup consists of a large paper sheet marked with angles and distances from the origin of the AprilTag. At every nominal measurement point on the error measurement setup, the viewing yaw angle ϕ of the camera can be different. When the camera is pointing directly towards the center of the AprilTag, the yaw angle is 90°. If the camera is pointed towards the right side of the center, the yaw angle is 90° + ϕ and, for the left side, the yaw angle is 90° − ϕ. We have taken measurements at 9 nominal points: three exactly in front of the tag and three on either side. The reason for selecting these specific measurement points is to include the maximum uncertainty in the measurements over the viewing angles and distances. Moreover, due to the limited field of view of the camera and the workspace environment, we limit the nominal points to 9. These points uniformly cover each side and the face of the tag. Raw measurements at different angles and distances from the AprilTag have been plotted and compared against the ground truth measured by MoCap.
It is observed that the ideal scenario for AprilTag accuracy is when the camera is pointing towards the center of the tag, that is, when the camera z-axis passes through the center of the AprilTag. Figure 7 shows the plots of raw measurements when the camera z-axis lies towards the center of the AprilTag. The center of the tag is taken as (x̄, ȳ) = (0, 0) in the AprilTag frame. These measurements are of the best accuracy that one can achieve from AprilTag and are used later as a reference. Similarly, Figure 7 also shows, in blue, readings for which the camera z-axis does not lie towards the center of the AprilTag. It can be seen that the blue readings incur large inaccuracy when the camera viewing angle is wider.
Table 3 summarizes the statistics of the measurements when the camera is pointing towards the center of the tag. For all readings, the yaw angle ϕ = 90°, meaning the camera is directed towards the center of the AprilTag. The first two columns (x_r and y_r) show the x-axis and y-axis of the nominal reference points where we intended to place the camera. The third column (x̄) and fourth column (ȳ) show the ground truth values at the desired reference points using MoCap. The fifth column (N) represents the total number of readings taken at the reference point (x_r, y_r). The sixth column (μ_x̄) and seventh column (μ_ȳ) give the means of the x̄-axis and ȳ-axis readings. The eighth column (σ²_x̄) and ninth column (σ²_ȳ) show the variances in the x̄-axis and ȳ-axis respectively. We get an average mean error of around 1.0 cm for x̄ and around 0.40 cm for ȳ over a variable distance of ±6 cm, ±15 cm and ±70 cm in the x̄-axis and 30 cm, 50 cm and 70 cm in the ȳ-axis.
Figure 8 shows the mean error plot for Table 3 using the nominal reference points. We can see that the error is minimum for both x̄ and ȳ exactly in front of the AprilTag. As we move along the left or right side, the error increases. Another notable finding is that the error increases as we increase the camera distance from the AprilTag along the ȳ-axis.
If, instead of pointing the camera towards the center of the AprilTag, we fix the camera such that its z-axis never points towards the center of the AprilTag, we get the worst readings in terms of accuracy, no matter which side of the tag the camera is located on. To empirically analyze this, an experiment is performed in which measurements are taken at fixed measurement points with the camera yaw angle ϕ varying from 70° to 110°. Figure 9 shows the data plot of AprilTag with changing camera yaw angle ϕ. Here, the spread of data around a measurement reference point follows a circular path distributed almost evenly on both sides. In addition to the plots, Table 4 shows the statistics of the reported data in terms of mean and variance in both measurement axes. As shown in Table 4, the variances (σ²_x̄, σ²_ȳ) and means (μ_x̄, μ_ȳ) of both x̄ and ȳ have increased manifold, especially when x̄ = ±20 cm and ȳ = 70 cm.
Additionally, Table 4 shows the mean value of all the measurements taken at a particular reference point with changing camera yaw angle ϕ. As Figure 9 shows, the data spread is distributed almost evenly around the reference point along a circular path. In other words, for a particular reference point, as the camera yaw angle ϕ changes, the reported position also changes along the circular path of the distribution. Since the spread of the data distribution is almost the same on either side of the reference point, the mean values (μ_x̄, μ_ȳ) remain relatively near the measurement reference point itself. To further analyze the worst possible case of camera yaw angle, a similar experiment has been conducted with the camera yaw angle fixed at ϕ = 110°. Table 5 shows the statistical analysis of this experiment. It shows that the inaccuracy has increased in the mean values (μ_x̄, μ_ȳ), especially in the x̄-axis. This is because the resulting measurements at ϕ = 110° lie at the farthest sides of the circular spread shown in Figure 9. Moreover, Figure 10 shows the error plot for Table 5 against the ground truth (MoCap). It shows that the error is minimum at x̄ = 0 but increases significantly as we move along the sides. For ȳ = 70 cm, the error is around 16 cm for x̄ = ±20 cm, whereas at x̄ = 0 cm, the error is only around 2 cm.
Based on raw AprilTag measurements, the following shortcomings are identified in the current AprilTag implementation.

Distance from Tag:

It is observed that the accuracy decreases as we move the camera away from the tag. As shown in Table 3 and Table 5, the mean and variance of both x̄ and ȳ increase with increasing distance from the tag along the z-axis.

Viewing Angle:

From multiple experiments, it is understood that the accuracy also decreases as the camera position changes from front-facing to sideways. In the ideal scenario (Table 3), even though the camera is pointing towards the center of the AprilTag at all points, the error is smaller for x̄ = 0 than for x̄ ≠ 0. This error increases as we increase x̄. Table 5 shows a similar pattern.

Yaw Angle of the Viewing Camera:

The previous extensive experiments show that the main source of inaccuracy is the frame inconsistency caused by motion, which significantly reduces performance. The reason is that the AprilTag fiducial system is coded in such a way that the output frame of reference depends upon the yaw angle ϕ of the camera attached to the moving body. As the orientation of the moving body changes, the output frame also changes, making it hard to maintain a consistent frame of reference. At any given point, in the x and z camera coordinates, a change in yaw angle ϕ causes the generation of a new origin, hence creating a new frame of reference for every yaw angle. The new origin is the intersection point of the AprilTag face plane with the straight line along the z-axis from the center of the camera. The current distance is then reported in this newly formed frame. Though the resulting output is relatively accurate in its respective frame of reference, the overall accuracy of all the yaw angles ϕ combined against a constant frame of reference is poor. Figure 9 shows the plot of AprilTag readings at fixed measurement points with the yaw angle ϕ varying from 70° to 110°. The variance and mean values of both x̄ and ȳ have also increased manifold, as shown in Table 4, especially when x̄ = ±20 cm and ȳ = 70 cm.

4. Improvement Techniques

Based on the measurement analysis of the AprilTag system, the following improvement techniques have been proposed.

4.1. Passive Correction for Frame Consistency

As illustrated in Section 3.3, the key source of inaccuracy in AprilTag readings is the misalignment of the camera z-axis with the center of the tag. When the camera follows a certain trajectory, its orientation may change over time, which causes inconsistency between two consecutive frames. To solve this problem, we propose a passive orientation correction, also referred to as the “Soft Yaw Axis Correction (SYAC)” technique. In this technique, the geometry of the whole setup is modified such that the axis (z-axis) passing through the center of the camera always points towards the tag’s origin, that is, (x̄, ȳ) = (0, 0). Figure 11 shows the drawing for the trigonometric correction. The solid triangle shows the original geometry without any correction in the camera frame of reference. The hypotenuse of the solid triangle, z, which emerges from the camera center, should touch the center of the tag. That ideal line is called z̄, depicted as a dotted line in Figure 11. Since ϕ is known, the angle ω′ that aligns the dotted triangle’s hypotenuse with the center can be calculated. Using simple trigonometry, z̄ and ω′ are calculated as:
$$ \omega' = \phi - \tan^{-1}\!\left( \frac{z \sin\phi}{x + z \cos\phi} \right), \tag{3} $$
$$ \bar{z} = \sqrt{ (z \sin\phi)^2 + (x + z \cos\phi)^2 }. \tag{4} $$
Once z̄ and ω′ are known, x̄ (Equation (5)), ȳ (Equation (6)) and θ̄ (Equation (7)) are derived, which eventually improves the accuracy:
$$ \bar{x} = x + x' = x + z \cos\phi, \tag{5} $$
$$ \bar{y} = z \sin\phi, \tag{6} $$
$$ \bar{\theta} = \arctan\frac{y'}{x + x'} = \arctan\frac{z \sin\phi}{x + z \cos\phi}. \tag{7} $$
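The SYAC correction of Equations (3)–(7) reduces to a few lines of code. The sketch below assumes the raw AprilTag measurements x and z are in metres and the camera yaw angle ϕ is in radians; the function name is our own, and atan2 is used in place of arctan for quadrant safety.

import math

def syac(x, z, phi):
    # Map raw measurements into the tag frame as if the camera pointed at the tag center.
    x_bar = x + z * math.cos(phi)                                     # Eq. (5)
    y_bar = z * math.sin(phi)                                         # Eq. (6)
    theta_bar = math.atan2(z * math.sin(phi), x + z * math.cos(phi))  # Eq. (7)
    z_bar = math.hypot(z * math.sin(phi), x + z * math.cos(phi))      # Eq. (4)
    omega = phi - theta_bar                                           # Eq. (3)
    return x_bar, y_bar, theta_bar, z_bar, omega

# Example: camera 0.20 m to the side, 0.70 m along its optical axis, yaw of 110 degrees.
print(syac(0.20, 0.70, math.radians(110)))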
Figure 12 shows the data scatter plot after applying this passive correction technique. It can be seen that the spread of the transformed data is decreased, and Table 6 shows a decreased variance in both the x̄ and ȳ axes. By zooming in on the point (x, z) = (0.20, 0.70), it can be observed that the original readings are displaced only after applying the correction, which brings them closer to the reference point. At a camera yaw angle of 110°, the raw reading is almost 40 cm off the true position in the x̄-axis and 2 cm in the ȳ-axis. After applying the correction, the error in the x̄-axis is reduced to 5 cm and in the ȳ-axis to 1 cm. Similarly, at a yaw angle of 70°, the error in the x̄-axis is reduced from 26 cm to 4 cm.
To further extend the comparison, Figure 13 shows the improvement of the AprilTag readings with respect to the camera yaw angle ϕ. The rotation of the camera around its yaw axis is limited to five sampling angles, that is, 70°, 80°, 90°, 100° and 110°. The rate of rotation for the yaw angle ϕ is 10 degrees per second; hence, it takes the camera 5 seconds to sweep in one direction. Moreover, the AprilTag is detected at 11 Hz; hence, we have approximately 11 readings at an individual yaw angle ϕ during a single sweep. As Figure 13 shows, at each measurement angle ϕ, our proposed Soft Yaw Axis Correction (SYAC) approach significantly improves the accuracy of raw AprilTag. The red cross at (x̄, ȳ) = (20, 70) shows the ground-truth value for the whole experiment. We can see from the plot that the accuracy of AprilTag decreases as the yaw angle ϕ of the camera moves away from 90°. The accuracy is worst when ϕ is either 110° or 70°. As the camera yaw angle ϕ approaches 90°, which implies the camera’s z-axis points towards the center of the tag, accuracy increases. As a result, Figure 13 shows the data at ϕ = 90° to be the most accurate.

4.2. Active Correction with a Yaw Axis Gimbal

Another way to correct for the misalignment of the camera z-axis with the center of the tag is to actively track and correct it in real time using a yaw-axis gimbal. A custom-built hardware setup is proposed to achieve this, as shown in Figure 14. The tracking Algorithm 1 consists of a Proportional-Integral-Derivative (PID) based tracking controller working at 10 Hz. The input to Algorithm 1 is the raw yaw angle of the camera ϕ in the camera frame of reference reported by the native AprilTag implementation. As discussed in Section 3.3, the yaw angle of the camera ϕ depends upon the alignment of the camera z-axis with the center of the AprilTag. If the camera z-axis passes through the center of the AprilTag, ϕ is equal to zero. As the face of the camera moves away from the center of the AprilTag, the value of ϕ changes and introduces inaccuracy. The goal of the Active Correction with a Yaw Axis Gimbal (ACYG) is to keep the z-axis of the camera aligned towards the center of the tag by keeping the angle ϕ equal to zero. A significant improvement in precision, with similar accuracy, has been observed using this technique. One of the problems with the passive correction technique is that it does not align the camera center accurately towards the center of the tag if the yaw angle ϕ is too large. Therefore, active compensation in combination with passive correction ensures that the camera yaw angle does not become too large. Figure 15 shows the effect of the one-axis tracking gimbal both with and without the passive frame-consistency correction. The data scatter plots show that the accuracy has increased significantly, especially in combination with SYAC. Table 7 and Table 8 summarize the variances and mean values while using the yaw-axis gimbal with raw AprilTag and with the SYAC correction, respectively.
Figure 15 shows that although active correction alone improves the overall accuracy, applying it in combination with the passive correction (SYAC) yields even more accurate readings. The reason is that both correction methods have their limitations. In the soft correction (SYAC), the camera z-axis sometimes fails to align with the tag center if the measured yaw angle ϕ of the viewing camera is too large. In the active correction, the tag is detected at 11 Hz but the correction runs at only 8 Hz due to system limitations, which introduces some inaccuracy into the active correction system. Hence, using both correction techniques in combination improves the overall accuracy manifold.
Algorithm 1 Active Camera Tracking of AprilTag Center.
Input: camera yaw angle ϕ
Output: servo angle Γ
1:  K_P ← proportional gain depending upon the z-axis value
2:  K_I ← integral gain depending upon the z-axis value
3:  K_D ← derivative gain depending upon the z-axis value
4:  ϵ ← initialize error with zero
5:  T ← 0.05                          ▹ Servo stopping threshold.
6:  α ← 0.008                         ▹ Smoothing factor.
7:  ϵ ← ϕ
8:  if ϵ > T then
9:      Integral ← Integral + ϵ
10: else Integral ← 0.00
11: end if
12: P ← ϵ × K_P
13: I ← Integral × K_I
14: D ← (LastYawAngle − CurrentYawAngle) × K_D
15: Drive ← P + I + D
16: Drive ← Drive × α
17: if Drive > 90 then                ▹ To keep the camera facing the AprilTag.
18:     Drive ← 90
19: else if Drive < −90 then
20:     Drive ← −90
21: end if
22: Γ ← CurrentServoAngle + Drive
23: LastYawAngle ← CurrentYawAngle
24: CurrentServoAngle ← Γ
25: return Γ
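For reference, the following is a minimal runnable Python translation of Algorithm 1. The gains are illustrative placeholders; in the actual system they depend on the measured z-axis distance, and the clamping and smoothing constants follow the listing above.

class TagTracker:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, threshold=0.05, alpha=0.008):
        self.kp, self.ki, self.kd = kp, ki, kd       # PID gains (illustrative)
        self.threshold = threshold                   # servo stopping threshold T
        self.alpha = alpha                           # smoothing factor
        self.integral = 0.0
        self.last_yaw = 0.0
        self.servo_angle = 0.0

    def update(self, yaw):
        # Return the servo angle that re-centers the camera on the tag (lines 7-25).
        error = yaw
        if error > self.threshold:
            self.integral += error
        else:
            self.integral = 0.0
        drive = (error * self.kp
                 + self.integral * self.ki
                 + (self.last_yaw - yaw) * self.kd)
        drive *= self.alpha
        drive = max(-90.0, min(90.0, drive))         # keep the camera facing the AprilTag
        self.servo_angle += drive
        self.last_yaw = yaw
        return self.servo_angle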

4.3. Comparative Results

Following the extensive experimentation and dataset collection, comparative results have been compiled to show the improvements more comprehensively. Table 9 shows the error comparison of the different approaches against raw AprilTag. Here, error is represented as the difference between the reported mean value and the ground truth in both the x̄ and ȳ axes. Columns 1 and 2 show the ground truth (MoCap) values of x̄ and ȳ, respectively, against which the error is computed. Columns 3 and 4, labeled “Raw AprilTag readings (camera pointing towards tag’s center),” show the error for the AprilTag system when the camera is always pointed towards the center of the tag. Earlier experiments have shown that this is the maximum possible accuracy one can achieve using AprilTag. Columns 5 and 6, “Raw AprilTag readings (camera pointing away from tag’s center),” show the raw data from AprilTag when the camera z-axis is not aligned with the center of the tag, resulting in inconsistent frames induced by camera motion. Columns 7 and 8, labeled “Applying Soft Yaw Angle Correction (SYAC) on raw AprilTag readings,” show the mean error after applying the proposed Soft Yaw Angle Correction (SYAC) to make the inconsistent frames consistent. Similarly, columns 9 and 10, labeled “Applying Active Correction with Yaw Axis Gimbal on raw AprilTag readings,” show the error after using the custom-built yaw-axis gimbal on the raw AprilTag system. Lastly, columns 11 and 12, labeled “Applying (SYAC + Active Yaw Axis Gimbal correction) on raw AprilTag readings,” show the error when both proposed approaches of soft and active yaw axis correction are applied in combination.
In addition to the accuracy, the precision of the AprilTag system has also been increased manifold by our proposed approaches, as shown in Figure 16 and Figure 17. Figure 16 shows the resulting precision of the different approaches in cm for the nominal reference point (x̄, ȳ) = (0, 70). We can see that the spread of x̄ for raw AprilTag data with the camera’s z-axis not pointing towards the tag’s center is around 13.9 cm, while after applying Soft Yaw Angle Correction (SYAC) plus Active Yaw Axis Gimbal Correction, it is decreased to 1.27 cm. Moreover, for ȳ, the spread is decreased from 1.61 cm to 0.22 cm. In addition, Figure 17 shows a similar analysis for the nominal reference point (x̄, ȳ) = (20, 70). Here, after applying Soft Yaw Axis Correction (SYAC) and Active Yaw Axis Gimbal Correction to AprilTag, the precision has improved manifold and the data spread for x̄ and ȳ is decreased from 12.04 cm to 0.84 cm and from 3.74 cm to 0.65 cm respectively. The Motion Capture (MoCap) spread is also illustrated for both nominal reference points for ground-truth comparison.
As mentioned earlier, the objective is to reduce the measurement error to be close to the ground truth. Figure 18 shows the statistical analysis of accuracy by plotting the Root Mean Square Error (RMSE) achieved by the proposed approaches against raw AprilTag. It shows that our proposed approaches significantly reduce the RMSE compared to bare AprilTag results. Moreover, this error is further reduced when both proposed approaches are combined. The resulting error is very close to the ground truth and to the ideal scenario when the camera is pointing towards the center of the AprilTag, hence achieving our objective.
Furthermore, the results from Table 9, Figure 16 and Figure 17 show that we can achieve significant improvements in the accuracy and precision of AprilTag with a slight trade-off in execution time. Table 10 shows the average execution time for a single input frame for the different approaches. The raw implementation of AprilTag has the quickest execution time compared to the proposed approaches, but the difference is not significant. As illustrated by Table 10, the combination of both proposed methods (passive yaw axis correction + active correction with yaw axis gimbal) can achieve a maximum operating frequency of approximately 4 Hz, which is acceptable for most robotics applications.

4.4. Probabilistic Sensor Model for AprilTag

The third contribution of this paper is the development of a forward probabilistic sensor model p(Y | X) for the AprilTag. The model is based on the collected measurement data and works for all locations, including both direct and indirect measurement points. Hence, it makes the empirical analysis of the current work applicable in a probabilistic decision-theoretic framework. With reference to Figure 2 and Figure 11, the true state X of the robot is given by the tuple X = [x̄, ȳ, θ̄]^T. The measurement vector Y is also a triplet, Y = [z, x, ϕ]^T. The measurements are assumed to be a nonlinear transformation of the true state, corrupted by some additive sensor noise, Y = F(X) + ε. Explicitly, these relations can be written as:
$$ z = \sqrt{ (\bar{z} \cos\bar{\theta} - x)^2 + (z \sin\phi)^2 } + \varepsilon_z, \tag{8} $$
$$ x = \bar{z} \cos\bar{\theta} - z \cos\phi + \varepsilon_x, \tag{9} $$
$$ \phi = \arctan\frac{\bar{z} \sin\bar{\theta}}{\bar{z} \cos\bar{\theta} - x} + \varepsilon_\phi. \tag{10} $$
We are interested in finding the joint probability distribution of the measurement vector given the true states, p(z, x, ϕ | x̄, ȳ, θ̄). In order to find the above-mentioned probability, Bayes’ theorem is applied to obtain:
$$ p(\bar{x}, \bar{y}, \bar{\theta} \mid z, x, \phi) = \frac{1}{J}\, p(z, x, \phi \mid \bar{x}, \bar{y}, \bar{\theta})\, p(x, z, \phi), \tag{11} $$
where J is a constant that can be factored out. Since we have no prior distribution p(x, z, ϕ), one can use a Maximum Likelihood Estimator for a uniform prior, that is, a simple inversion of the model to deduce the states from the measurements by using Equations (5)–(7).
Hence, if we have a model p(x̄, ȳ, θ̄ | z, x, ϕ) for all states, we can use it to localize even at points where we do not have measurement data. We achieve this using a Gaussian Process (GP) based regression method [44] as follows. First, we make the simplifying assumption that the components of Y are not mutually correlated. While this may not be factually true, we find below that it is sufficient for using the tag in practice. (The extension of the framework to correlated sensor measurements is a work in progress.) Therefore, we treat each measurement variable in Y as a scalar nonlinear transformation f(X); these are precisely the individual measurement equations given above. Using the notation introduced in Reference [44], we are interested in finding the distribution p(f_*^test | X, X_*, Y_*), where f_*^test is a stochastic process for which x̄, ȳ and θ̄ have a joint Gaussian distribution, X = (x̄, ȳ, θ̄) is the unknown test point where the distribution has to be calculated, X_* ≡ {(x̄_i, ȳ_i, θ̄_i)} for i = 1, ..., N are the ground truth training points and Y_* ≡ {(x̄_j, ȳ_j, θ̄_j)} for j = 1, ..., M are the data collected in the experiments as the output of AprilTag at the training points X_*.
In GP regression, we have to define a covariance function (or Kernel function) whose parameters (the so-called hyper-parameters) have to be tuned to best explain the data at hand. We have chosen a squared exponential covariance function, which is widely used because of its smoothness and differentiability:
$$ k(x_a, x_b) = \alpha \exp\!\left( -\frac{|x_a - x_b|^2}{2\beta} \right), \tag{12} $$
where α and β are the hyper-parameters of the kernel function.
The GP regression methodology assumes that the training output Y_* and the test output f_*^test have a joint Gaussian distribution. (Once again, this is a simplifying assumption that may not hold exactly but works well in practice.)
$$ \begin{bmatrix} Y_* \\ f_*^{test} \end{bmatrix} \sim \mathcal{N}\!\left( 0, \begin{bmatrix} K(X_*, X_*) & K(X_*, X) \\ K(X, X_*) & K(X, X) \end{bmatrix} \right), \tag{13} $$
where K(X_*, X_*) is an N × N matrix formed by the covariance function (kernel) evaluated at every training point X_* against each training point X_*, K(X, X) is an M × M matrix formed by the kernel evaluated at every test point X against each test point X, and K(X_*, X) is an N × M matrix formed by the kernel evaluated at every training point X_* against each test point X. Further details can be found in standard references on GPs (e.g., Reference [44]).
Here, we are only interested in incorporating the knowledge provided by the training data X_* regarding the distribution of functions, rather than drawing random functions from the prior. So we restrict the joint prior distribution to contain only those functions which agree with the observed data points Y_*, to get the posterior distribution over functions. In other words, we reject all those functions generated from the prior that disagree with the observations. In probabilistic terms, this can be achieved by conditioning the joint distribution on the observations to get the predictive distribution p(f_*^test | X, X_*, Y_*) ~ N(μ, Σ), where
$$ \begin{aligned} \mu &= K(X, X_*)\left( K(X_*, X_*) + \sigma_A^2 I \right)^{-1} Y_*, \\ \Sigma &= K(X, X) - K(X, X_*)\left( K(X_*, X_*) + \sigma_A^2 I \right)^{-1} K(X_*, X), \end{aligned} \tag{14} $$
where σ_A² is the noise variance for the particular AprilTag measurement variable under consideration. The process is repeated for all three measurement variables to regress the distribution for all measurement-state pairs. The results of the regression have been summarized in Table 11.
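As a concrete illustration of Equations (12)–(14), the following sketch regresses one scalar measurement variable over the training states. The hyper-parameters, noise variance and training data here are placeholders; the tuned values from our experiments are the ones reported in Table 11.

import numpy as np

def sq_exp_kernel(Xa, Xb, alpha=1.0, beta=0.1):
    # Squared exponential covariance of Eq. (12): alpha * exp(-|x_a - x_b|^2 / (2 beta)).
    d2 = np.sum((Xa[:, None, :] - Xb[None, :, :]) ** 2, axis=-1)
    return alpha * np.exp(-d2 / (2.0 * beta))

def gp_predict(X_train, Y_train, X_test, alpha=1.0, beta=0.1, sigma_A2=1e-3):
    # Posterior mean and covariance of Eq. (14) for one scalar measurement variable.
    K_tt = sq_exp_kernel(X_train, X_train, alpha, beta) + sigma_A2 * np.eye(len(X_train))
    K_st = sq_exp_kernel(X_test, X_train, alpha, beta)
    K_ss = sq_exp_kernel(X_test, X_test, alpha, beta)
    K_inv = np.linalg.inv(K_tt)
    mu = K_st @ K_inv @ Y_train
    Sigma = K_ss - K_st @ K_inv @ K_st.T
    return mu, Sigma

# Placeholder training data: ground-truth states (x_bar, y_bar, theta_bar) and the
# corresponding raw AprilTag readings of one measurement variable (e.g., z).
X_train = np.random.rand(50, 3)
Y_train = np.random.rand(50)
X_test = np.array([[0.0, 0.7, np.pi / 2]])   # query state
mu, Sigma = gp_predict(X_train, Y_train, X_test)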

Experimental Verification of Sensor Model

To verify the validity of our proposed AprilTag sensor model, we have used it in various settings to estimate the state of a robot. We assume a standard odometry model in which the robot can rotate around its own axis and move forward [45]. We have performed both indoor and outdoor experiments to validate the proposed sensor model.
For the indoor experiment, at any time step t the state vector X_t is given by X_t = [x̄, ȳ, θ̄]^T, where x̄ is the movement of the robot along the x-axis, ȳ is the movement along the y-axis and θ̄ is the rotation around the robot’s own axis. Our goal is to find p(X_t | X_(t−1), u_t, z_t), where X_(t−1) is the robot state at the previous time step, u_t is the current input command and z_t is the current sensor measurement.
We have used a Monte Carlo simulation technique [46] to estimate the position and pose of the robot, since it does not require any prior knowledge of the data distribution. In this method, k particles are randomly generated around an initial starting point X_i with a certain initial uncertainty based upon the system:
$$ X_p \sim \mathcal{N}\!\left( \begin{bmatrix} x_i \\ y_i \\ \theta_i \end{bmatrix}, \begin{bmatrix} \sigma_{xx}^2 & 0 & 0 \\ 0 & \sigma_{yy}^2 & 0 \\ 0 & 0 & \sigma_{\theta\theta}^2 \end{bmatrix} \right), \tag{15} $$
where X_p are the randomly generated particles with p ∈ {1, ..., k}, x_i is the initial value for the x-axis, y_i is the initial value for the y-axis, θ_i is the initial angle and σ²_xx, σ²_yy, σ²_θθ are the initial variances in the x-axis, y-axis and θ respectively. Each particle is then propagated forward based upon the assumed motion model:
$$ X_t = f_t(X_{t-1}) + n = f_t(x_{t-1}, y_{t-1}, \theta_{t-1}) + n, \tag{16} $$
where f_t is a function representing the motion model of the system and n is Gaussian noise. The observation model is then applied to each propagated particle to get the predicted measurement ẑ_t. These predicted measurements are weighted against the measurement data from the sensor z_t. Each particle is assigned a probabilistic weight based upon how close it is to the measurement after applying the observation model:
$$ P_{weight}^{p} = \frac{1}{\sqrt{(2\pi)^3 \det R}} \exp\!\left( -\frac{1}{2} (z_t - \hat{z}_t)\, R^{-1} (z_t - \hat{z}_t)^T \right), \qquad R = \begin{bmatrix} r_x & 0 & 0 \\ 0 & r_y & 0 \\ 0 & 0 & r_\theta \end{bmatrix}, \tag{17} $$
where p ∈ {1, ..., k} indexes the particles and R is a 3 × 3 covariance matrix. The assigned probability weights are then normalized such that their sum is equal to 1:
$$ P_{CDF}^{p} = \frac{P_{weight}^{p}}{\sum_{n=1}^{k} P_{weight}^{n}}, \tag{18} $$
where P_CDF^p is the cumulative distribution of the probability density of the weight vector P_weight^p. The weighted particles are then re-sampled for the next step by uniformly sampling from the cumulative distribution, as shown in Equation (19). Since the particles are selected according to these probabilities, on average, particles with greater weights are selected.
$$ X_p = P_{CDF}^{-1}(h), \qquad \text{where } h \sim U(0, 1). \tag{19} $$
After obtaining the new particles, the whole process is repeated m_o times, where m_o is the total number of tag observations in an experiment. At every step, the average of all the particles is taken as the estimated position of the robot. This algorithm relies on a survival-of-the-fittest philosophy: particles that are close to the sensor measurement are weighted higher than the others, giving them a better chance of being selected again for the next round.
In our experiment, an incremental motion model has been used for the propagation of particles from one configuration c_i to another configuration c_f. Three parameters, δ_θi, δ_d and δ_θf, encode the complete motion from one configuration to another. The input command u_θi maps to the rotation δ_θi of the robot at the initial configuration c_i such that it faces the final configuration c_f; u_d maps to the straight forward motion δ_d from the initial configuration c_i to the final configuration c_f; and u_θf maps to the final rotation δ_θf at the destination point for the final pose angle. Figure 19 shows each parameter in detail.
For the verification of the proposed sensor model using Monte Carlo simulation, we used 10,000 particles, initially generated at a known starting point X_i = [x_i, y_i, θ_i] with an initial variance of n_i. As assumed earlier, our robot can only move forward and rotate around its own axis, so the motion model for each particle is given by
$$ X_t = \begin{bmatrix} x_{t-1} + d_{for} \cos(\theta_{t-1} + \delta_{\theta_i}) + n_x \\ y_{t-1} + d_{for} \sin(\theta_{t-1} + \delta_{\theta_i}) + n_y \\ \theta_{t-1} + \delta_{\theta_i} + \delta_{\theta_f} + n_\theta \end{bmatrix}, \tag{20} $$
where d_for is the forward distance resulting from the input command u_for, δ_θi is the angle of rotation at the initial position resulting from the input command u_θi, and δ_θf is the rotation angle at the final configuration point resulting from the input command u_θf. n_x is the Gaussian noise in the x-axis, n_y is the Gaussian noise in the y-axis and n_θ is the Gaussian noise in θ.
At any time t, the measurement vector is given by z_t = [x_tag, y_tag, θ_tag]^T. In this experiment, we have assumed that x, y and θ are independent in nature. So, for the observation model, the proposed AprilTag sensor model shown in Equation (14) has been used.
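Putting Equations (15)–(20) together, the following sketch outlines one particle filter update. It assumes the GP sensor model is wrapped in a hypothetical sensor_model(state) function returning the expected measurement [x_tag, y_tag, theta_tag]; the noise levels and particle count are illustrative.

import numpy as np

K = 10_000  # number of particles

def init_particles(x_i, y_i, theta_i, sigmas):
    # Eq. (15): sample particles around the known start with a diagonal covariance.
    return np.random.normal([x_i, y_i, theta_i], sigmas, size=(K, 3))

def propagate(particles, d_for, d_theta_i, d_theta_f, motion_noise):
    # Eq. (20): incremental motion model with additive Gaussian noise.
    x, y, th = particles.T
    x_new = x + d_for * np.cos(th + d_theta_i)
    y_new = y + d_for * np.sin(th + d_theta_i)
    th_new = th + d_theta_i + d_theta_f
    return np.stack([x_new, y_new, th_new], axis=1) + np.random.normal(0.0, motion_noise, (K, 3))

def weight(particles, z_t, R_diag, sensor_model):
    # Eq. (17): Gaussian likelihood of the AprilTag measurement for each particle,
    # followed by the normalization of Eq. (18).
    z_hat = np.array([sensor_model(p) for p in particles])
    diff = z_t - z_hat
    w = np.exp(-0.5 * np.sum(diff ** 2 / R_diag, axis=1))
    return w / np.sum(w)

def resample(particles, weights):
    # Eq. (19): draw K particles with probability proportional to their weights.
    idx = np.random.choice(K, size=K, p=weights)
    return particles[idx]

def step(particles, u, z_t, R_diag, motion_noise, sensor_model):
    # One full update; the state estimate is the mean of the resampled particles.
    particles = propagate(particles, *u, motion_noise)
    particles = resample(particles, weight(particles, z_t, R_diag, sensor_model))
    return particles, particles.mean(axis=0)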
Figure 20 shows the trajectory generated by applying the particle filter, powered by our proposed AprilTag sensor model, in comparison with the ground truth generated by MoCap. The AprilTag’s center is placed at (x̄, ȳ, θ̄) = (0, 0, 90°) on a calibrated setup and the robot is moved in front of the AprilTag along a rectangular path. The rectangular path was selected for better visualization of the trajectory data and to observe the loop closure. Figure 20 shows that the trajectory generated by the particle filter (red) is very close to the ground truth trajectory (green). The experiment shows that the particles converge very quickly because of the high precision of the system achieved by applying the proposed techniques.
To further investigate the performance of our proposed sensor model, a similar experiment was performed in a large outdoor environment. For this purpose, a larger AprilTag of size 305 × 305 cm fixed on the ground has been used, as shown in Figure 21. In this experiment, the robot moved along an irregular path from the left side of the AprilTag to the right side, as far as the tag is visually detectable, and then back towards the starting position on the left side, as shown in Figure 22. To show the significance of each proposed improvement, we have divided the experiment into two phases. In phase one, active tracking of the AprilTag is not activated and only passive correction is done using the sensor model (red path). In phase two, active tag tracking is also activated along with the passive correction (blue path), as shown in Figure 22. Since the experiment is in an outdoor environment, a ground truth trajectory cannot be generated. Hence, for ground truth verification, we manually marked three validation points in meters, that is, A (x̄, ȳ) = (0, 40), B (x̄, ȳ) = (26, 30) and C (x̄, ȳ) = (19, 20), before the experiment and deliberately passed through them. Figure 22 shows that the trajectory passes through the validation points.
Moreover, the previously proposed pose-indexed probabilistic sensor model in Equation (14) is regressed over small-scale indoor experimental data. The training points are at a maximum of 1 m from the AprilTag. Therefore, the model trained by using Gaussian Process (GP) regression in Equation (14) is only valid for sub-meter trajectories. To make it workable over long distances, we propose a general sensor model with a scale factor d, where d is the distance of the camera from the tag along the ȳ-axis. To calculate the scale factor d, we use the relation shown in Equation (21):
$$ d = \frac{f \times h_r \times I_p}{h_p \times S_s \times c}, \tag{21} $$
where f is the focal length of the camera in mm, h_r is the real height of the AprilTag in mm, I_p is the height of the image in pixels, h_p is the AprilTag height in pixels and S_s is the image sensor’s height in mm. c is a constant to change the unit scale; since we used meters as our unit of choice for distances in the outdoor experiment, we used c = 1000. After evaluating the scale factor d, the general sensor model becomes:
$$ \begin{aligned} \mu_G &= K_G(X, X_{*G})\left( K_G(X_{*G}, X_{*G}) + \sigma_{GA}^2 I \right)^{-1} Y_{*G}, \\ \Sigma_G &= K_G(X, X) - K_G(X, X_{*G})\left( K_G(X_{*G}, X_{*G}) + \sigma_{GA}^2 I \right)^{-1} K_G(X_{*G}, X), \\ K_G &= k_G(x_a, x_b) = d^2 \times \alpha \exp\!\left( -\frac{|x_a - x_b|^2}{2 \times d^2 \times \beta} \right), \\ X_{*G} &= X_* \times d, \qquad Y_{*G} = Y_* \times d, \qquad \sigma_{GA}^2 = \sigma_A^2 \times d^2. \end{aligned} \tag{22} $$
Here μ_G and Σ_G are the predictive mean and variance for a test point X under the generalized sensor model, K_G is the generalized kernel, X_{*G} are the generalized training points and Y_{*G} are the generalized observed values. We have verified experimentally that the generalized sensor model gives almost the same result at a given distance d.
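The scaling enters only through the kernel amplitude, the length scale and the noise term, so it can be implemented as a thin wrapper around standard GP regression. The sketch below assumes the kernel hyperparameters α = 0.01 and β = 20000 reported in Table 11 and an arbitrary placeholder value for the noise σ_A²; it illustrates the scaling, not the released implementation.

```python
import numpy as np

def gp_predict_scaled(X_train, Y_train, X_test, d,
                      alpha=0.01, beta=20000.0, sigma_a2=1e-4):
    """GP prediction with the distance-scaled (generalized) sensor model.

    X_train, Y_train : sub-meter training poses and observed values.
    d                : camera-to-tag distance along the y-axis (scale factor).
    alpha, beta      : kernel hyperparameters (alpha = 0.01, beta = 20000 as
                       in Table 11); sigma_a2 is an assumed noise level.
    """
    def k(a, b):
        # Squared-exponential kernel scaled by d^2 in amplitude and length.
        sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return d ** 2 * alpha * np.exp(-sq / (2.0 * d ** 2 * beta))

    Xg, Yg = X_train * d, Y_train * d           # scale training data by d
    K = k(Xg, Xg) + (sigma_a2 * d ** 2) * np.eye(len(Xg))
    Ks = k(X_test, Xg)
    L = np.linalg.cholesky(K)
    w = np.linalg.solve(L.T, np.linalg.solve(L, Yg))
    mu = Ks @ w                                  # predictive mean
    v = np.linalg.solve(L, Ks.T)
    cov = k(X_test, X_test) - v.T @ v            # predictive covariance
    return mu, cov
```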
Figure 22 shows the trajectory generated by our generalized sensor model in the outdoor environment using Monte Carlo simulation. Figure 23 shows the axis-wise plots of the raw AprilTag data (red) and the particle filter output (blue); the filter suppresses the noise and improves the overall performance.
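For completeness, the prediction step that pairs with the measurement update above can be written directly from the incremental motion model of Figure 19, which encodes each motion by an initial rotation δθ_i, a translation δd and a final rotation δθ_f. The noise levels below are assumed placeholders, not the values used in our experiments.

```python
import numpy as np

def propagate(particles, d_theta_i, d_dist, d_theta_f,
              noise=(0.02, 0.01, 0.02)):
    """Particle filter prediction step using the incremental motion model.

    Each motion between configurations c_i and c_f is encoded by an
    initial rotation d_theta_i, a translation d_dist and a final
    rotation d_theta_f (Figure 19); `noise` holds assumed standard
    deviations for the three increments.
    """
    n = len(particles)
    r1 = d_theta_i + np.random.normal(0.0, noise[0], n)
    tr = d_dist + np.random.normal(0.0, noise[1], n)
    r2 = d_theta_f + np.random.normal(0.0, noise[2], n)
    particles[:, 0] += tr * np.cos(particles[:, 2] + r1)   # x update
    particles[:, 1] += tr * np.sin(particles[:, 2] + r1)   # y update
    particles[:, 2] += r1 + r2                             # heading update
    return particles
```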

5. Conclusions

Fiducial markers are a low-cost solution for obtaining accurate ground truth measurements, especially in robotics. Among state-of-the-art fiducial markers, AprilTag is the one most commonly used by researchers. Fast and robust tag detection, strong digital coding of the embedded marker, and robustness against varying lighting conditions, lens distortion and small occlusions are the main features that distinguish AprilTag from other fiducial markers. However, researchers have found that AprilTag lacks the precision and accuracy required for delicate tasks and have therefore combined additional sensors with AprilTag to improve its accuracy. In this paper, we have empirically analyzed AprilTag and identified the shortcomings causing inaccuracies. Through extensive experiments and analysis, we have identified that the primary source of error is the yaw angle variation of the viewing camera, which is not compensated in the current AprilTag implementation. Other sources of inaccuracy include the distance and viewing angle of the camera to the tag.
Based on the identified shortcomings, three improvement approaches have been proposed to further improve the accuracy and precision of AprilTag at a slight execution-time cost. The first approach is passive correction of the camera yaw angle using trigonometric corrections to point the camera towards the center of the AprilTag. The second approach uses a custom-built hardware tracking gimbal to align the camera towards the center of the tag. Lastly, we have demonstrated how to use the experimental data to build a probabilistic model of the AprilTag sensor using Gaussian Processes (GP) regression, which can be reused as a forward sensor model in many localization applications. We have also shown that the accuracy and precision of AprilTag increase manifold when the proposed approaches are used in combination. Comparative results against the Motion Capture (MoCap) system have been presented to demonstrate the proposed improvements.
The suggested enhancement approaches can be used in multiple applications, including robotics and virtual reality (VR). We have experimentally tested the proposed approaches in both indoor and outdoor environments to demonstrate the completeness of the proposed probabilistic sensor model. Nonetheless, we have only analyzed the horizontal, vertical and yaw-axis accuracy reported by AprilTag, which is sufficient for many ground-based localization applications. For more complex operations in 6-DOF environments such as aerial robotics, the remaining axes (height, roll and pitch) are also important; further work is needed in that direction.
Moreover, for hard real-time applications, the proposed custom-built yaw-axis gimbal for active correction of the camera yaw angle may not move fast enough to meet hard timing constraints. The approach could be further improved with an FPGA-based implementation for a quicker response, and a faster servo motor would also increase the tracking speed. In addition, the theoretical framework for GP regression in this paper makes some assumptions and simplifications that need further investigation. Another open direction for future work is the inclusion of multiple sensors or tags in the probabilistic sensor model to further enhance performance. Fusing Inertial Measurement Unit (IMU) data while tracking the AprilTag could also improve performance; it may increase the robustness of the robot-generated trajectory by filling the gaps when the AprilTag is not detected. There are thus multiple directions in which this work can be extended, and we have attempted to make it accessible to the robotics community for reuse in their research [47] and to lay the exposition open to critical examination by the community.

6. Code & Dataset

The proposed improved AprilTag code and datasets can be accessed and downloaded at http://cyphynets.lums.edu.pk/index.php/Apriltag.

Author Contributions

Conceptualization, S.M.A. and A.M.; methodology, S.M.A., S.A. and A.M.; software, S.M.A.; validation, S.M.A., S.A., K.B. and A.M.; formal analysis, S.M.A., S.A. and A.M.; investigation, S.M.A., S.A., K.B. and A.M.; resources, K.B. and A.M.; data curation, S.M.A.; writing–original draft preparation, S.M.A.; writing–review and editing, A.M.; visualization, S.M.A., S.A. and A.M.; supervision, K.B. and A.M.; project administration, S.M.A. and A.M.; funding acquisition, A.M.

Funding

This work is supported by German Academic Exchange Service (DAAD) and LUMS-FIF grants.

Acknowledgments

We would like to thank our lab support staff, especially Allah Baksh, for helping with the outdoor experiments.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MAV    Micro Aerial Vehicle
UAV    Unmanned Aerial Vehicle
2D     Two Dimensional
3D     Three Dimensional
DOF    Degree of Freedom
MoCap  Motion Capture System (Vicon MX F-49)
PID    Proportional-Integral-Derivative
GP     Gaussian Processes
InC    Inconsistent
C      Consistent
G+InC  Gimbal with inconsistent
G+Con  Gimbal with consistent

References

  1. Leonard, J.J.; Durrant-Whyte, H.F. Mobile robot localization by tracking geometric beacons. IEEE Trans. Robot. Autom. 1991, 7, 376–382. [Google Scholar] [CrossRef]
  2. Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT press: Cambridge, MA, USA, 2005. [Google Scholar]
  3. Moeslund, T.B.; Hilton, A.; Krüger, V. A survey of advances in vision-based human motion capture and analysis. Comput. Vis. Image Underst. 2006, 104, 90–126. [Google Scholar] [CrossRef]
  4. Fiala, M. Comparing ARTag and ARToolkit Plus fiducial marker systems. In Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications. IEEE, Ottawa, ON, Canada, 1–2 October 2005; pp. 148–153. [Google Scholar]
  5. Reina, G.; Vargas, A.; Nagatani, K.; Yoshida, K. Adaptive Kalman filtering for GPS-based mobile robot localization. In Proceedings of the 2007 IEEE International Workshop on Safety, Security and Rescue Robotics. IEEE, Rome, Italy, 27–29 September 2007; pp. 1–6. [Google Scholar]
  6. Olson, E. AprilTag: A robust and flexible visual fiducial system. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation. IEEE, Shanghai, China, 9–13 May 2011; pp. 3400–3407. [Google Scholar]
  7. Owen, C.B.; Xiao, F.; Middlin, P. What is the best fiducial? In Proceedings of the First IEEE International Workshop on the Augmented Reality Toolkit. IEEE, Darmstadt, Germany, 29 September 2002; p. 8. [Google Scholar]
  8. Kato, H.; Billinghurst, M. Marker tracking and hmd calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99). IEEE, San Francisco, CA, USA, 20–21 October 1999; pp. 85–94. [Google Scholar]
  9. Cho, Y.; Lee, J.; Neumann, U. A multi-ring color fiducial system and an intensity-invariant detection method for scalable fiducial-tracking augmented reality. In Proceedings of IWAR. Citeseer, San Francisco, CA, USA, 1 November 1998. [Google Scholar]
  10. López de Ipiña, D.; Mendonça, P.R.; Hopper, A. TRIP: A low-cost vision-based location system for ubiquitous computing. Pers. Ubiquitous Comput. 2002, 6, 206–219. [Google Scholar] [CrossRef]
  11. Fiala, M. ARTag, a fiducial marker system using digital techniques. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 590–596. [Google Scholar]
  12. Wagner, D.; Schmalstieg, D. Artoolkitplus for Pose Tracking on Mobile Devices. 2007. Available online: www.researchgate.net/publication/216813818_ARToolKitPlus_for_Pose_Tracking_on_Mobile_Devices (accessed on 30 October 2019).
  13. Xu, A.; Dudek, G. Fourier tag: A smoothly degradable fiducial marker system with configurable payload capacity. In Proceedings of the 2011 Canadian Conference on Computer and Robot Vision. IEEE, St. Johns, NL, Canada, 25–27 May 2011; pp. 40–47. [Google Scholar]
  14. Bergamasco, F.; Albarelli, A.; Rodola, E.; Torsello, A. Rune-tag: A high accuracy fiducial marker with strong occlusion resilience. In Proceedings of the CVPR 2011. IEEE, Providence, RI, USA, 20–25 June 2011; pp. 113–120. [Google Scholar]
  15. Edwards, M.J.; Hayes, M.P.; Green, R.D. High-accuracy fiducial markers for ground truth. In Proceedings of the 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ). IEEE, Palmerston North, New Zealand, 21–22 November 2016; pp. 1–6. [Google Scholar]
  16. Sagitov, A.; Shabalina, K.; Lavrenov, R.; Magid, E. Comparing fiducial marker systems in the presence of occlusion. In Proceedings of the 2017 International Conference on Mechanical, System and Control Engineering (ICMSC). IEEE, St. Petersburg, Russia, 19–21 May 2017; pp. 377–382. [Google Scholar]
  17. Feng, C.; Kamat, V.R. Augmented reality markers as spatial indices for indoor mobile AECFM applications. Proceeding of 12th International Conference on Construction Applications of Virtual Reality (CONVR 2012), Taipei, Taiwan, 1–2 November 2012. [Google Scholar]
  18. Li, J.; Slembrouck, M.; Deboeverie, F.; Bernardos, A.M.; Besada, J.A.; Veelaert, P.; Aghajan, H.; Philips, W.; Casar, J.R. A hybrid pose tracking approach for handheld augmented reality. In Proceedings of the 9th International Conference on Distributed Smart Cameras. ACM, Seville, Spain, 8–11 September 2015; pp. 7–12. [Google Scholar]
  19. Wang, J.; Sadler, C.; Montoya, C.F.; Liu, J.C. Optimizing ground vehicle tracking using unmanned aerial vehicle and embedded apriltag design. In Proceedings of the 2016 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, Las Vegas, NV, USA, 15–17 December 2016; pp. 739–744. [Google Scholar]
  20. Wang, K.; Phang, S.K.; Ke, Y.; Chen, X.; Gong, K.; Chen, B.M. Vision-aided tracking of a moving ground vehicle with a hybrid uav. In Proceedings of the 2017 13th IEEE International Conference on Control & Automation (ICCA). IEEE, Ohrid, Macedonia, 3–6 July 2017; pp. 28–33. [Google Scholar]
  21. Ling, K.; Chow, D.; Das, A.; Waslander, S.L. Autonomous maritime landings for low-cost vtol aerial vehicles. In Proceedings of the 2014 Canadian Conference on Computer and Robot Vision. IEEE, Montreal, QC, Canada, 6–9 May 2014; pp. 32–39. [Google Scholar]
  22. Zhang, Y.; Yu, Y.; Jia, S.; Wang, X. Autonomous landing on ground target of UAV by using image-based visual servo control. In Proceedings of the 2017 36th Chinese Control Conference (CCC). IEEE, Dalian, China, 26–28 July 2017; pp. 11204–11209. [Google Scholar]
  23. Jiaxin, H.; Yanning, G.; Zhen, F.; Yuqing, G. Vision-based autonomous landing of unmanned aerial vehicles. In Proceedings of the 2017 Chinese Automation Congress (CAC). IEEE, Jinan, China, 20–22 October 2017; pp. 3464–3469. [Google Scholar]
  24. Tang, D.; Hu, T.; Shen, L.; Ma, Z.; Pan, C. AprilTag array-aided extrinsic calibration of camera–laser multi-sensor system. Robot. Biomim. 2016, 3, 13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Ramirez, E.A. An Experimental Study of Mobile Device Localization. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2015. Available online: https://dspace.mit.edu/handle/1721.1/98770 (accessed on 30 October 2019).
  26. Parkison, S.A.; Psota, E.T.; Pérez, L.C. Automated indoor RFID inventorying using a self-guided micro-aerial vehicle. In Proceedings of the IEEE International Conference on Electro/Information Technology. IEEE, Milwaukee, WI, USA, 5–7 June 2014; pp. 335–340. [Google Scholar]
  27. Raina, S.; Chang, H.Y.; Sarkar, S.; Chen, M.N.; Cai, Y. An Integrated System for 3D Pose Estimation in Cluttered Environments. Available online: https://mrsd.ri.cmu.edu/wp-content/uploads/2017/07/Team8Report.pdf (accessed on 30 October 2019).
  28. Maragh, J.M. Dynamic Tracking With AprilTags for Robotic Education. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2013. Available online: https://dspace.mit.edu/handle/1721.1/83725 (accessed on 30 October 2019).
  29. Zake, Z.; Caro, S.; Roos, A.S.; Chaumette, F.; Pedemonte, N. Stability Analysis of Pose-Based Visual Servoing Control of Cable-Driven Parallel Robots. In Proceedings of the International Conference on Cable-Driven Parallel Robots, 30 June–4 July; Springer: Krakow, Poland, 2019; pp. 73–84. [Google Scholar]
  30. Florea, A.G.; Buiu, C. Sensor Fusion for Autonomous Drone Waypoint Navigation Using ROS and Numerical P Systems: A Critical Analysis of Its Advantages and Limitations. In Proceedings of the 2019 22nd International Conference on Control Systems and Computer Science (CSCS). IEEE, Bucharest, Romania, 28–30 May 2019; pp. 112–117. [Google Scholar]
  31. Britto, J.; Cesar, D.; Saback, R.; Arnold, S.; Gaudig, C.; Albiez, J. Model identification of an unmanned underwater vehicle via an adaptive technique and artificial fiducial markers. In Proceedings of the OCEANS 2015-MTS/IEEE Washington. IEEE, Washington, DC, USA, 19–22 October 2015; pp. 1–6. [Google Scholar]
  32. Fuchs, C.; Neuhaus, F.; Paulus, D. 3D pose estimation for articulated vehicles using Kalman-filter based tracking. Pattern Recognit. Image Anal. 2016, 26, 109–113. [Google Scholar] [CrossRef]
  33. Kalman, R.E. A new approach to linear filtering and prediction problems. J. Basic Eng. 1960, 82, 35–45. [Google Scholar] [CrossRef] [Green Version]
  34. Mueggler, E.; Faessler, M.; Fontana, F.; Scaramuzza, D. Aerial-guided navigation of a ground robot among movable obstacles. In Proceedings of the 2014 IEEE International Symposium on Safety, Security, and Rescue Robotics (2014). IEEE, Hokkaido, Japan, 27–30 October 2014; pp. 1–8. [Google Scholar]
  35. Xie, Y.; Shao, R.; Guli, P.; Li, B.; Wang, L. Infrastructure based calibration of a multi-camera and multi-lidar system using apriltags. In Proceedings of the 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, Changshu, China, 26–30 June 2018; pp. 605–610. [Google Scholar]
  36. Nissler, C.; Marton, Z.C. Robot-to-Camera Calibration: A Generic Approach Using 6D Detections. In Proceedings of the 2017 First IEEE International Conference on Robotic Computing (IRC). IEEE, Taichung, Taiwan, 10–12 April 2017; pp. 299–302. [Google Scholar]
  37. de Almeida Barbosa, J.P.; Dias, S.S.; dos Santos, D.A. A Visual-Inertial Navigation System Using AprilTag for Real-Time MAV Applications. In Proceedings of the 2018 25th International Conference on Mechatronics and Machine Vision in Practice (M2VIP). IEEE, Stuttgart, Germany, 20–22 November 2018; pp. 1–7. [Google Scholar]
  38. Abawi, D.F.; Bienwald, J.; Dorner, R. Accuracy in optical tracking with fiducial markers: an accuracy function for ARToolKit. In Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality. IEEE Computer Society, Arlington, VA, USA, 2–5 November 2004; pp. 260–261. [Google Scholar]
  39. Wang, J.; Olson, E. AprilTag 2: Efficient and robust fiducial detection. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, Daejeon, Korea, 9–14 October 2016; pp. 4193–4198. [Google Scholar]
  40. Jin, P.; Matikainen, P.; Srinivasa, S.S. Sensor fusion for fiducial tags: Highly robust pose estimation from single frame RGBD. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, Vancouver, BC, Canada, 24–28 September 2017; pp. 5770–5776. [Google Scholar]
  41. Zhenglong, G.; Qiang, F.; Quan, Q. Pose Estimation for Multicopters Based on Monocular Vision and AprilTag. In Proceedings of the 2018 37th Chinese Control Conference (CCC). IEEE, Wuhan, China, 26–27 July 2018; pp. 4717–4722. [Google Scholar]
  42. Kayhani, N.; Heins, A.; Zhao, W.; Nahangi, M.; McCabe, B.; Schoelligb, A.P. Improved Tag-based Indoor Localization of UAVs Using Extended Kalman Filter. In Proceedings of the ISARC. International Symposium on Automation and Robotics in Construction, Banff, AB, Canada, 21–24 May 2019; Volume 36, pp. 624–631. [Google Scholar]
  43. Plungis, J. Self-driving cars: Driving into the future. Consum. Rep. 2017. Available online: https://velodynelidar.com/docs/news/Self-Driving%20Cars_%20Driving%20Into%20the%20Future%20-%20Consumer%20Reports.pdf (accessed on 30 October 2019).
  44. Rasmussen, C.E. Gaussian processes in machine learning. In Summer School on Machine Learning; Springer: Tübingen, Germany, 2003; pp. 63–71. [Google Scholar]
  45. Choset, H.M.; Hutchinson, S.; Lynch, K.M.; Kantor, G.; Burgard, W.; Kavraki, L.E.; Thrun, S. Principles of Robot Motion: Theory, Algorithms, and Implementation; MIT press: Cambridge, MA, USA, 2005. [Google Scholar]
  46. Thrun, S.; Fox, D.; Burgard, W.; Dellaert, F. Robust Monte Carlo localization for mobile robots. Artif. Intell. 2001, 128, 99–141. [Google Scholar] [CrossRef] [Green Version]
  47. Abbas, S.M. AprilTag Code & Datasets: For Analysis & Improvement 2019. Available online: http://cyphynets.lums.edu.pk/index.php/Apriltag (accessed on 31 October 2019).
Figure 1. The four steps of the AprilTag detection algorithm for an input image of an AprilTag of class 36H10.
Figure 2. Trajectory using AprilTag detections. The trail of the transformation frame centers that constitute the trajectory is depicted in blue for various time instances. Here, p_i^x, p_i^y and p_i^z of Equation (1) (although not shown in the figure) denote the position of the AprilTag along the x_i-, y_i- and z_i-axes in the respective camera frame of reference.
Figure 3. Motion Capture (MoCap) setup at LUMS Biomechanics lab for AprilTag comparison.
Figure 4. Accuracy plot for Motion Capture (MoCap).
Figure 5. Photographs from different views of the AprilTag error measurement setup. (Left): Shows the top-down view of the error measurement setup. (Middle): Shows the placement of the camera in front of the AprilTag over error measurement setup. (Right): Shows the side view of the measurement recording process.
Figure 6. Error measurement setup showing measurement positions and yaw angles of the camera to AprilTag placed at the origin.
Figure 7. Multiple raw AprilTag readings plotted for ideal (green) and worst (blue) scenarios. Mean ground-truth (MoCap) readings are plotted as red crosses.
Figure 8. Error plot with camera’s z-axis pointed towards the center of AprilTag. (Left): Error plot for x ¯ -axis. (Right): Error plot for y ¯ -axis.
Figure 9. Plot of measurements with changing camera yaw angle ϕ for 70° ≤ ϕ ≤ 110°.
Figure 10. Error plot with the camera yaw angle ϕ fixed at 110°. (Left): Error plot for the x̄-axis. (Right): Error plot for the ȳ-axis.
Figure 11. Geometrically aligning subsequent frames.
Figure 12. A comparison plot for AprilTag raw readings and improved SYAC measurements with changing camera yaw angle ϕ for 70° ≤ ϕ ≤ 110°. Blue circles show the clustering of the plotted data around a ground truth point.
Figure 13. An angle-wise comparison plot for AprilTag raw readings and improved SYAC measurements with changing camera yaw angle ϕ for 70° ≤ ϕ ≤ 110°. The plot shows that our proposed technique significantly improves the raw AprilTag measurements.
Figure 14. Yaw-axis gimbal hardware setup developed by the authors. A monocular camera is mounted on a Dynamixel servo motor, which is controlled by an Arduino Mega 2560. The controller runs as a slave ROS process in the localization application. The housing is a 3D-printed retrofit.
Figure 15. Data scatter plot for geometrically consistent (SYAC) and non-consistent frames (raw AprilTag) with the custom-built yaw-axis gimbal.
Figure 16. Comparison of resulting data spread (precision) from different approaches against the ground truth (Mocap) at nominal reference point straight in front of AprilTag i.e., ( x ¯ , y ¯ ) = ( 0 , 70 ) .
Figure 17. With an oblique viewing angle i.e., ( x ¯ , y ¯ ) = ( 20 , 70 ) , a comparison of resulting data spread (precision) from different approaches against the ground truth (Mocap).
Figure 18. Root Mean Square Error (RMSE) comparison of raw AprilTag against proposed approaches and MoCap.
Figure 19. Incremental motion model used between two configuration points c i and c f , encoded by three parameters δ θ i , δ d and δ θ f for Monte Carlo simulation.
Figure 20. Trajectory comparison between MoCap and trajectory generated by Monte Carlo simulation using our proposed AprilTag sensor model.
Figure 21. (Left): Camera view of the detected AprilTag (red polygon) outdoors. (Right): Camera view of the detected AprilTag (red polygon) indoors. Both images show the detection polygons along with the detected tag IDs based on the embedded code.
Figure 22. Trajectory generated using Monte Carlo Simulation in an outdoor environment.
Figure 23. Comparison of raw AprilTag data and the particle filter output based on the proposed generalized sensor model along the x̄-axis and ȳ-axis. The dotted line shows the initialization of the yaw-axis gimbal for active correction.
Table 1. Commonly used different fiducial markers with key features.
Tag Names | Key Features
ARToolkit [8] | Uses a solid black outline for quick and robust detection.
Multi-ring Marker [9] | Uses colored rings instead of a black marker for more robust detection.
TRIP [10] | Uses a 2D circular mark for location identification.
ARTag [11] | Robust marker detection against different lighting conditions.
ARToolKitPlus [12] | The ARToolKit algorithm optimized for embedded devices.
Fourier-Tag [13] | Uses a robust tag encoding scheme based on the phase spectrum of a 1-D (gray-scale) signal.
RUNE-Tag [14] | Uses perspective properties of circular dots for high accuracy and robustness.
CircularTag [15] | Uses its circular nature and non-linear optimization to further increase accuracy.
AprilTag [6] | Uses stronger digital encoding; robust against different lighting conditions and occlusions.
Table 2. Measurement from Motion Capture (MoCap).
Nominal Reference Points | Motion Capture (MoCap) Readings
x_r (cm) | y_r (cm) | N | μ_x̄ (cm) | μ_ȳ (cm) | σ²_x̄ (cm²) | σ²_ȳ (cm²)
0 | 30 | 217 | 0.7062 | 30.3437 | 0.003110 | 0.003120
6 | 30 | 195 | 6.9367 | 29.5193 | 0.004890 | 0.000590
−6 | 30 | 193 | −3.9230 | 30.1210 | 0.045800 | 0.003080
0 | 50 | 204 | 2.9902 | 50.0462 | 0.09500 | 0.014700
15 | 50 | 202 | 17.7118 | 49.6042 | 0.009490 | 0.002650
−15 | 50 | 205 | −12.0469 | 50.8171 | 0.000168 | 0.000242
0 | 70 | 217 | 3.2683 | 70.0810 | 0.002380 | 0.000267
20 | 70 | 199 | 23.3681 | 69.4768 | 0.000210 | 0.000197
−20 | 70 | 212 | −16.8097 | 70.9505 | 0.037100 | 0.001090
Table 3. Measurement stats with camera z-axis pointed towards the center of AprilTag.
Nominal Reference PointsGround Truth (MoCap)AprilTag Readings
x r (cm) y r (cm) x ¯ (cm) y ¯ (cm)N μ x ¯ (cm) μ y ¯ (cm) σ x ¯ 2 (cm 2 ) σ y ¯ 2 (cm 2 )
0300.016630.020110−0.242030.1510 0.002000 0.000170
6307.016130.0101157.016129.8573 0.000040 0.000040
−630−5.979829.9280−5.979830.4293 0.000035 0.002080
0500.10249.9601070.757150.0819 0.000930 0.000002
155014.95250.6911316.914149.4206 0.000090 0.000030
−1550−14.9849.90103−16.918549.4014 0.000090 0.000034
0700.00370.051341.308070.0264 0.007390 0.000014
207020.0670.0614422.357469.0718 0.000310 0.000092
−2070−20.0170.02151−21.756069.5979 0.000240 0.000063
Table 4. Measurement stats when the AprilTag center does not lie on the z-axis of the camera (changing camera yaw angle ϕ).
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings
x_r (cm) | y_r (cm) | x̄ (cm) | ȳ (cm) | N | μ_x̄ (cm) | μ_ȳ (cm) | σ²_x̄ (cm²) | σ²_ȳ (cm²)
0 | 30 | 0.7062 | 30.3437 | 152 | −1.1162 | 30.1501 | 9.47 | 0.16
6 | 30 | 6.9367 | 29.5193 | 182 | 5.4598 | 29.4149 | 14.0 | 0.56
−6 | 30 | −3.9230 | 30.1210 | 162 | −6.4092 | 29.6787 | 14.0 | 1.07
0 | 50 | 2.9902 | 50.0462 | 133 | −1.3508 | 49.1515 | 76.0 | 1.51
15 | 50 | 17.7118 | 49.6042 | 144 | 13.8889 | 48.8854 | 84.0 | 6.87
−15 | 50 | −12.0469 | 50.8171 | 186 | −15.5709 | 48.3046 | 57.0 | 7.43
0 | 70 | 3.2683 | 70.0810 | 163 | −0.0285 | 68.3890 | 194.0 | 2.61
20 | 70 | 23.3681 | 69.4768 | 149 | 23.4214 | 67.4216 | 154.0 | 14.0
−20 | 70 | −16.8097 | 70.9505 | 140 | −24.4433 | 67.4337 | 112.0 | 14.0
Table 5. (Worst scenario) Measurement stats with the camera yaw angle ϕ fixed at 110°.
Nominal Reference PointsGround Truth (MoCap)AprilTag Readings
x r (cm) y r (cm) ϕ (deg) x ¯ (cm) y ¯ (cm)N μ x ¯ (cm) μ y ¯ (cm) σ x ¯ 2 (cm 2 ) σ y ¯ 2 (cm 2 )
0301100.02030.001113−1.205730.5027 0.000230 0.000003
103011010.0730.01154−0.369430.7558 0.021100 0.003230
−1030110−9.6730.06178−0.579131.1269 0.000101 0.000021
0501100.10050.071147−0.820250.5234 0.001350 0.000004
155011014.96050.100120−1.937151.8464 0.000210 0.000046
−1550110−14.9149.97117−2.229052.4599 0.000165 0.000034
0701100.0370.011841.184170.4607 0.013600 0.000015
207011020.1069.981020.311371.4872 0.000750 0.000039
−2070110−20.0870.05128−1.861972.1167 0.000425 0.000018
Table 6. Table showing measurement stats after applying Soft Yaw Axis Correction (SYAC) on raw AprilTag data.
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings
x_r (cm) | y_r (cm) | x̄ (cm) | ȳ (cm) | N | μ_x̄ (cm) | μ_ȳ (cm) | σ²_x̄ (cm²) | σ²_ȳ (cm²)
0 | 30 | 0.7062 | 30.3437 | 113 | −0.5813 | 30.6166 | 0.31 | 0.11
6 | 30 | 6.9367 | 29.5193 | 154 | 6.4232 | 29.7444 | 0.27 | 0.31
−6 | 30 | −3.9230 | 30.1210 | 178 | −6.2826 | 30.5416 | 0.20 | 0.44
0 | 50 | 2.9902 | 50.0462 | 147 | 0.1930 | 51.3185 | 3.06 | 0.91
15 | 50 | 17.7118 | 49.6042 | 120 | 17.3150 | 49.5959 | 3.32 | 3.37
−15 | 50 | −12.0469 | 50.8171 | 117 | −16.2733 | 50.1972 | 1.12 | 5.04
0 | 70 | 3.2683 | 70.0810 | 184 | 1.8551 | 71.8400 | 30.0 | 3.37
20 | 70 | 23.3681 | 69.4768 | 102 | 24.8826 | 70.5867 | 12.0 | 13.0
−20 | 70 | −16.8097 | 70.9505 | 128 | −26.4522 | 69.1570 | 10.0 | 19.0
Table 7. Use of the yaw-axis gimbal on the raw AprilTag system.
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings
x_r (cm) | y_r (cm) | x̄ (cm) | ȳ (cm) | N | μ_x̄ (cm) | μ_ȳ (cm) | σ²_x̄ (cm²) | σ²_ȳ (cm²)
0 | 30 | 0.642 | 31.006 | 156 | −2.7569 | 30.3124 | 3.39 | 0.07
6 | 30 | 6.71 | 29.820 | 144 | 5.8448 | 30.0716 | 1.65 | 0.08
−6 | 30 | −5.89 | 29.851 | 126 | −6.6391 | 29.8201 | 2.99 | 0.19
0 | 50 | 2.017 | 50.02 | 124 | −2.9412 | 49.8801 | 6.92 | 0.08
15 | 50 | 16.90 | 51.13 | 115 | 15.9652 | 49.6746 | 4.71 | 0.49
−15 | 50 | −14.42 | 49.63 | 144 | −15.6127 | 49.4842 | 4.16 | 0.49
0 | 70 | −2.10 | 68.90 | 149 | −5.5017 | 70.0076 | 7.35 | 0.11
20 | 70 | 22.70 | 71.16 | 152 | 21.0146 | 69.8425 | 5.84 | 0.58
−20 | 70 | −21.10 | 71.23 | 138 | −21.7858 | 69.7444 | 5.92 | 0.56
Table 8. Use of yaw axis gimbal with consistent frames (SYAC).
Nominal Reference Points | Ground Truth (MoCap) | AprilTag Readings
x_r (cm) | y_r (cm) | x̄ (cm) | ȳ (cm) | N | μ_x̄ (cm) | μ_ȳ (cm) | σ²_x̄ (cm²) | σ²_ȳ (cm²)
0 | 30 | 0.642 | 31.006 | 156 | −1.4932 | 30.6753 | 0.06 | 0.08
6 | 30 | 6.71 | 29.820 | 144 | 6.7595 | 29.7747 | 0.01 | 0.03
−6 | 30 | −5.89 | 29.851 | 126 | −5.5643 | 30.4668 | 0.05 | 0.17
0 | 50 | 2.017 | 50.02 | 124 | −2.1262 | 50.1620 | 0.19 | 0.13
15 | 50 | 16.90 | 51.13 | 115 | 16.8541 | 49.2115 | 0.19 | 0.43
−15 | 50 | −14.42 | 49.63 | 144 | −14.8491 | 50.1751 | 0.10 | 0.42
0 | 70 | −2.10 | 68.90 | 149 | −4.3831 | 70.2868 | 1.63 | 0.05
20 | 70 | 22.70 | 71.16 | 152 | 21.9349 | 69.3702 | 0.71 | 0.43
−20 | 70 | −21.10 | 71.23 | 138 | −20.8403 | 70.5489 | 0.29 | 0.60
Table 9. Comparison of AprilTag against various proposed approaches.
Ground-Truth (cm)Error in Mean ( μ ) Using Different Approaches for AprilTag. (cm)
Motion Capture System (MoCap)Raw AprilTag Readings (Camera Pointing Towards Tag’s Center)Raw AprilTag Readings (Camera Pointing Away from Tag’s Center)Applying Soft Yaw Angle Correction (SYAC) on Raw AprilTag ReadingsApplying Active Correction with Yaw Axis Gimbal on Raw AprilTag ReadingsApplying (SYAC + Active Yaw Axis Gimbal Correction) on Raw AprilTag Readings
x ¯ y ¯ | x ¯ μ x ¯ | | y ¯ μ y ¯ | | x ¯ μ x ¯ | | y ¯ μ y ¯ | | x ¯ μ x ¯ | | y ¯ μ y ¯ | | x ¯ μ x ¯ | | y ¯ μ y ¯ | | x ¯ μ x ¯ | | y ¯ μ y ¯ |
0.64231.000.8840.8551.7580.8551.2230.3893.39890.6932.1350.330
6.7129.820.30610.0371.2500.4050.2860.0750.86520.2510.0490.045
−5.8929.850.08980.5780.5190.1720.3920.6900.74910.0300.3250.615
2.01750.021.25990.06183.3670.8681.8241.2984.9580.1394.1430.141
16.9051.130.01411.7093.0112.2440.4151.5340.9341.4550.0451.918
−14.4249.632.49850.2281.1501.3251.8530.5671.1920.1450.4290.545
−2.1068.903.4081.1262.0710.5113.9552.9403.4011.1072.2831.386
22.7071.160.34262.0880.7213.7382.1820.5731.6851.3170.7651.789
−21.1071.230.65590.4323.3432.5965.3520.8730.6850.2850.2590.518
Table 10. Average execution time for different approaches.
Approaches | Average Execution Time Per Input Image
Raw AprilTag implementation | 90 ms
Passive yaw axis correction (SYAC) | 130 ms
Active correction with yaw axis gimbal | 125 ms
(Passive yaw axis + Active gimbal) correction | 255 ms
Table 11. GP predicted distributions at unseen points for α = 0.01 and β = 20000.
Unknown PointsPredictive DistributionExperimental Distribution
( x ¯ , y ¯ , θ ¯ (cm,deg)) μ f * test (cm) σ f * test 2 (cm 2 ) μ x * (cm,deg) σ x * 2 (cm 2 )
(0,30,90)(0.4,30.9,95.3)( 3.9 × 10 8 , 2.2 × 10 6 , 2.6 × 10 6 )(0.1,30.6,89.77)( 6.1 × 10 4 , 1.4 × 10 4 , 1.4 × 10 8 )
(10,30,100)(11.9,30.9,92.2)( 1.3 × 10 5 , 2.2 × 10 6 , 8.6 × 10 6 )(10.4,29.5,70.57)( 3.7 × 10 5 , 4.7 × 10 5 , 4.7 × 10 9 )
(0,50,80)(0.4,49.9,88.56)( 3.9 × 10 8 , 3.0 × 10 6 , 2.2 × 10 6 )(1.0,51.8,88.84)( 7.5 × 10 4 , 1.0 × 10 5 , 1.0 × 10 9 )
(−15,50,100)(−16.01,49.9,92.2)( 3.3 × 10 5 , 3.0 × 10 6 , 8.6 × 10 6 )(−18.8,52.7,109.6)( 1.1 × 10 4 , 1.2 × 10 5 , 1.2 × 10 9 )
(20,70,100)(20.7,70.01,92.2)( 1.9 × 10 4 , 4.5 × 10 6 , 8.6 × 10 6 )(22.1,67.0,72.1)( 1.1 × 10 4 , 1.1 × 10 4 , 1.1 × 10 8 )
(−20,70,100)(−21.3,70.01,92.2)( 4.9 × 10 5 , 4.5 × 10 6 , 8.6 × 10 6 )(−26.1,78.0,109.8)( 1.8 × 10 4 , 2.4 × 10 5 , 2.4 × 10 9 )
