The third contribution of this paper is the development of a forward probabilistic sensor model for the AprilTag. The model is based on the collected measurement data and is valid at all locations, including both direct and indirect measurement points. It therefore makes the empirical analysis of the current work applicable to a probabilistic decision-theoretic framework. With reference to Figure 2 and Figure 11, the true state X of the robot is given by the tuple (x, y, θ). The measurement vector Y is also a triplet (x_m, y_m, θ_m). The measurements are assumed to be a nonlinear transformation of the true state, corrupted by additive sensor noise, Y = f(X) + ε. Explicitly, these relations can be written as:

x_m = f_x(x, y, θ) + ε_x,
y_m = f_y(x, y, θ) + ε_y,
θ_m = f_θ(x, y, θ) + ε_θ.
We are interested in finding the joint probability distribution of the measurement vector given the true states, P(Y | X). To find the above-mentioned probability, Bayes' theorem is applied to obtain:

P(X | Y) = J · P(Y | X) · P(X),

where J = 1/P(Y) is a normalizing constant that can be factored out. Since we have no prior distribution P(X), one can use a Maximum Likelihood Estimator for a uniform prior, that is, a simple inversion of the model to deduce the states from the measurements by using Equations (5)–(7).
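The maximum-likelihood inversion under a uniform prior can be sketched as a grid search over candidate states. The forward model, noise levels, and grid ranges below are illustrative placeholders, not the paper's regressed AprilTag model:

```python
import numpy as np

# Maximum-likelihood inversion of a forward sensor model over a state grid.
# The forward model and noise levels are hypothetical stand-ins for the
# regressed AprilTag model of Equations (5)-(7).
def sensor_model(state):
    x, y, theta = state
    # Hypothetical nonlinear measurement transformation f(X).
    return np.array([x + 0.01 * np.sin(theta), y + 0.01 * np.cos(theta), theta])

noise_std = np.array([0.02, 0.02, 0.05])  # assumed noise (m, m, rad)

def log_likelihood(state, z):
    """Gaussian log-likelihood of measurement z given state X (uniform prior)."""
    r = (z - sensor_model(state)) / noise_std
    return -0.5 * np.sum(r ** 2)

def mle_invert(z, grid):
    """Pick the candidate state maximizing P(z | X) over a discrete grid."""
    scores = [log_likelihood(s, z) for s in grid]
    return grid[int(np.argmax(scores))]

# Coarse grid of candidate states around the workspace.
grid = [np.array([x, y, t])
        for x in np.linspace(-0.5, 0.5, 21)
        for y in np.linspace(0.2, 1.0, 17)
        for t in np.linspace(-np.pi / 4, np.pi / 4, 15)]

z = sensor_model(np.array([0.1, 0.6, 0.2]))  # noiseless synthetic measurement
est = mle_invert(z, grid)
```

The recovered state is only as fine as the grid, which is why the subsequent GP regression and particle filtering are preferable in practice.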
Hence, if we have a model P(Y | X) for all states, we can use it to localize even at points where we do not have measurement data. We achieve this using a Gaussian Process (GP) based regression method [44] as follows. First, we make the simplifying assumption that the components of Y are not mutually correlated. While this may not be factually true, we find below that it is sufficient for using the Tag in practice. (The extension of the framework to correlated sensor measurements is a work in progress.) Therefore, we treat each measurement variable in Y as a scalar nonlinear transformation of the state; these are precisely the individual measurement equations given above. Using the notation introduced in Reference [44], we are interested in finding the distribution p(f_* | X_*, X, y), where f(·) is a stochastic process for which any finite collection of function values has a joint Gaussian distribution, X_* is the unknown test point where the distribution has to be calculated, X denotes the ground truth points for the training data, and y denotes the data collected in experiments as the output of the AprilTag at the training points X.
The GP regression methodology makes the assumption that the training output y and the test output f_* have a joint Gaussian distribution (once again, a simplifying assumption that may not hold exactly, but works in practice):

[y; f_*] ~ N( 0, [[K(X, X) + σ_n² I, K(X, X_*)]; [K(X_*, X), K(X_*, X_*)]] ),

where K(X, X) is an N × N matrix defined by the covariance function (kernel) evaluated at every training point against each training point, K(X_*, X_*) is an N_* × N_* matrix defined by the kernel evaluated at every test point against each test point, and K(X, X_*) is an N × N_* matrix formed by the kernel evaluated at every training point against each test point. Further details can be found in standard references on GPs (e.g., Reference [44]).
Here, we are only interested in incorporating the knowledge provided by the training data (X, y) about the distribution over functions, rather than drawing random functions from the prior. We therefore restrict the joint prior distribution to contain only those functions which agree with the observed data points, to obtain the posterior distribution over functions. In other words, we reject all those functions generated from the prior that disagree with the observations. In probabilistic terms, this is achieved by conditioning the joint distribution on the observations to get the predictive distribution f_* | X, y, X_* ~ N(mean_*, cov_*), where

mean_* = K(X_*, X) [K(X, X) + σ_n² I]^(−1) y,
cov_*  = K(X_*, X_*) − K(X_*, X) [K(X, X) + σ_n² I]^(−1) K(X, X_*),

and σ_n² is the noise variance for the particular AprilTag measurement variable under consideration. The process is repeated for all three measurement variables to regress the distribution for all measurement–state pairs. The results of the regression are summarized in Table 11.
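The per-axis GP prediction described above can be sketched directly from the standard predictive equations. The kernel hyperparameters and the toy training function below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

# Minimal GP regression sketch for one measurement axis, using the standard
# predictive mean/covariance equations with a squared-exponential kernel.
def rbf_kernel(A, B, length=0.3, signal=1.0):
    """Squared-exponential covariance between point sets A (N,d) and B (M,d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal ** 2 * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_test, noise_var=1e-4):
    """Posterior mean and covariance of f_* given noisy training data."""
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)          # K(X, X_*)
    Kss = rbf_kernel(X_test, X_test)          # K(X_*, X_*)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, cov

# Toy 1-D training set: a smooth "measurement" as a function of a scalar state.
X_train = np.linspace(0.0, 1.0, 20)[:, None]
y_train = 0.1 * np.sin(2 * np.pi * X_train[:, 0])
X_test = np.array([[0.25], [0.75]])
mean, cov = gp_predict(X_train, y_train, X_test)
```

Running one such regression per measurement axis mirrors the independence assumption made above for the three AprilTag outputs.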
Experimental Verification of Sensor Model
To verify the validity of our proposed AprilTag sensor model, we have used it in various settings to estimate the state of a robot. We assume a standard odometry model in which the robot can rotate around its own axis and can move forward [45]. We have performed both indoor and outdoor experiments to validate the proposed sensor model.
For the indoor experiment, at any time step t the state vector is given by X_t = (x_t, y_t, θ_t), where x_t is the position of the robot along the x-axis, y_t is the position along the y-axis, and θ_t is the rotation around the robot's own axis. Our goal is to find P(X_t | X_{t−1}, u_t, z_t), where X_{t−1} is the robot state at the previous time step, u_t is the current input command, and z_t is the current sensor measurement.
We have used the Monte Carlo simulation technique [46] to estimate the position and pose of the robot, since it does not require any prior knowledge of the data distribution. In this method, k particles are randomly generated around an initial starting point with a certain initial uncertainty based upon the system:

p_i = (x_0 + n_x, y_0 + n_y, θ_0 + n_θ),  i = 1, …, k,

where p_i are the randomly generated particles, x_0 is the initial value on the x-axis, y_0 is the initial value on the y-axis, θ_0 is the initial angle, and σ_x², σ_y², σ_θ² are the initial variances in the x-axis, y-axis, and θ, respectively, with n_x ~ N(0, σ_x²), n_y ~ N(0, σ_y²), and n_θ ~ N(0, σ_θ²). Then, each particle is propagated forward based upon the assumed motion model

X_t^i = g(X_{t−1}^i, u_t) + n,
where g(·) is a function representing the motion model of the system and n is Gaussian noise. The observation model is then applied to each propagated particle to obtain a predicted measurement ẑ_t^i = h(X_t^i). These predicted measurements are weighted against the measurement data z_t from the sensor. Each particle is assigned a probabilistic weight based upon how close its predicted measurement is to the actual measurement:

w_i ∝ exp( −(1/2) (z_t − ẑ_t^i)^T R^(−1) (z_t − ẑ_t^i) ),  i = 1, …, k,

where k is the number of particles and R is a 3 × 3 covariance matrix. The assigned probability weights are then normalized so that their sum equals 1,

ŵ_i = w_i / Σ_{j=1}^{k} w_j,

where C(i) = Σ_{j≤i} ŵ_j is the cumulative distribution of the probability density of the weight vector. The weighted particles are then re-sampled for the next step by uniformly sampling from the cumulative distribution, as shown in Equation (19). Since the particles are selected with probability proportional to their weights, on average, particles with greater weights are selected more often.
After obtaining the new particles, the whole process is repeated T times, where T is the total number of tag observations in an experiment. At every step, the average of all the particles is taken as the estimate of the true position of the robot. This algorithm relies on a survival-of-the-fittest philosophy: particles that are close to the sensor measurement are weighted higher than the others, giving them a better chance of being selected again for the next round.
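The propagate–weight–resample loop described above can be sketched as follows. The observation model, noise levels, and command format are illustrative assumptions standing in for the regressed AprilTag model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal particle-filter sketch for a planar robot state (x, y, theta).
K = 500                                   # number of particles
R = np.diag([0.02, 0.02, 0.05]) ** 2      # 3x3 measurement covariance

def motion(p, u):
    """Propagate particles with an incremental command u = (rot1, trans, rot2)."""
    rot1, trans, rot2 = u
    th = p[:, 2] + rot1
    out = np.column_stack([p[:, 0] + trans * np.cos(th),
                           p[:, 1] + trans * np.sin(th),
                           th + rot2])
    return out + rng.normal(0.0, [0.01, 0.01, 0.02], p.shape)

def observe(p):
    """Toy observation model: the sensor reports the state directly."""
    return p

def update(particles, u, z):
    particles = motion(particles, u)
    diff = z - observe(particles)
    w = np.exp(-0.5 * np.sum(diff @ np.linalg.inv(R) * diff, axis=1))
    w /= w.sum()
    # Resample by uniform sampling from the cumulative weight distribution.
    idx = np.minimum(np.searchsorted(np.cumsum(w), rng.uniform(size=K)), K - 1)
    return particles[idx]

# One step: the robot drives 0.5 m forward along x from the origin.
particles = rng.normal([0.0, 0.0, 0.0], [0.1, 0.1, 0.05], (K, 3))
particles = update(particles, (0.0, 0.5, 0.0), np.array([0.5, 0.0, 0.0]))
estimate = particles.mean(axis=0)
```

In the experiments, `observe` would be replaced by the GP-regressed AprilTag model, and the weighting would use the per-axis predictive variances.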
In our experiment, an incremental motion model has been used for the propagation of particles from one configuration (x_{t−1}, y_{t−1}, θ_{t−1}) to another configuration (x_t, y_t, θ_t). Three parameters, δ_rot1, δ_trans, and δ_rot2, have been used to encode the complete motion from one configuration to another. The input command maps to a rotation δ_rot1 of the robot at the initial configuration such that it faces the final configuration, a straight forward motion δ_trans from the initial configuration to the final configuration, and a final rotation δ_rot2 at the destination point to reach the final pose angle. See Figure 19, which shows each parameter in detail.
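The decomposition of a pair of consecutive poses into the three incremental parameters can be sketched as follows; the pose tuples are illustrative (x, y, theta) triples:

```python
import math

# Decompose consecutive poses into the incremental odometry parameters
# (delta_rot1, delta_trans, delta_rot2) described above.
def odometry_params(pose_prev, pose_next):
    x0, y0, th0 = pose_prev
    x1, y1, th1 = pose_next
    rot1 = math.atan2(y1 - y0, x1 - x0) - th0   # turn to face the goal
    trans = math.hypot(x1 - x0, y1 - y0)        # drive straight ahead
    rot2 = th1 - th0 - rot1                     # final in-place rotation
    return rot1, trans, rot2

# Example: move from the origin (facing +x) to (1, 1) with final heading pi/2.
r1, t, r2 = odometry_params((0.0, 0.0, 0.0), (1.0, 1.0, math.pi / 2))
```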
For verification of the proposed sensor model using Monte Carlo simulation, the particles were initially generated at a known starting point with a given initial variance. As assumed earlier, our robot can only move forward and rotate around its own axis, so the motion model for each particle can be given by

x_t = x_{t−1} + d cos(θ_{t−1} + δ_rot1) + n_x,
y_t = y_{t−1} + d sin(θ_{t−1} + δ_rot1) + n_y,
θ_t = θ_{t−1} + δ_rot1 + δ_rot2 + n_θ,

where d is the forward distance resulting from the input command, δ_rot1 is the angle of rotation at the initial position resulting from the input command, δ_rot2 is the rotation angle at the final configuration point resulting from the input command, n_x is the Gaussian noise in the x-axis, n_y is the Gaussian noise in the y-axis, and n_θ is the Gaussian noise in θ.
At any time t, the measurement vector is given by z_t = (x_m, y_m, θ_m). In this experiment, we have assumed that x, y, and θ are independent in nature, so for the observation model, the proposed AprilTag sensor model shown in Equation (14) has been used.
Figure 20 shows the trajectory generated by applying the particle filter, empowered with our proposed AprilTag sensor model, in comparison with the ground truth generated by MoCap. The AprilTag's center is placed over a calibrated setup, and the robot is moved in front of the AprilTag along a rectangular path. The rectangular shape was selected to give a better visualization of the trajectory data and to check the loop closure. Figure 20 shows that the trajectory generated by the particle filter (red) is very close to the ground truth trajectory (green). The experiment shows that the particles converge very quickly because of the high precision achieved by applying the proposed techniques.
To further investigate the performance of the proposed sensor model, a similar experiment was performed in a large outdoor environment. For this purpose, a larger AprilTag fixed on the ground was used, as shown in Figure 21. In this experiment, the robot moved along an irregular path from the left side of the AprilTag to the right side, as far as the tag remains visually detectable, and then back to the left side towards the starting position, as shown in Figure 22. To show the significance of each proposed improvement, we divided the experiment into two phases. In phase one, active tracking of the AprilTag is not activated and only passive correction is done using the sensor model (red path). In phase two, active tag tracking is activated along with the passive correction (blue path), as shown in Figure 22. Since the experiment is in an outdoor environment, a ground truth trajectory cannot be generated. Hence, for ground truth verification, we manually marked three validation points (in meters) before the experiment and deliberately passed through them. Figure 22 shows that the trajectory passes through the validation points.
Moreover, the previously proposed pose-indexed probabilistic sensor model in Equation (14) was regressed over indoor, small-scale experimental data. The training points are at most 1 m from the AprilTag; therefore, the model trained using Gaussian Processes (GP) in Equation (14) is only valid for sub-meter trajectories. To make it workable over long distances, we propose a general sensor model with a scale factor d, where d is the distance of the camera from the tag along the optical axis. To calculate the scale factor d, we use the pinhole-camera equality shown in Equation (21):

d = c · (f · H · h_img) / (h_tag · s),

where f is the focal length of the camera in mm, H is the real height of the AprilTag in mm, h_img is the height of the image sensor in pixels, h_tag is the AprilTag height in pixels, s is the image sensor's height in mm, and c is a constant to change the unit scale. Since we used meters as our unit of choice for distances in the outdoor experiment, c was chosen accordingly. After evaluating the scale factor d, our general sensor model is obtained by incorporating d into the GP predictive model of Equation (14). Here, μ_g and σ_g² are, respectively, the mean value and variance for a test point X under the generalized sensor model, k_g is the generalized kernel, X_g is the generalized training point, and y_g is the generalized observed value. We have empirically tested and verified experimentally that the generalized sensor model gives almost the same result at a given distance d.
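The pinhole-camera scale factor can be sketched as a one-line computation. The numeric values below (focal length, tag size, sensor geometry) are illustrative, not the paper's actual calibration, and c = 0.001 assumes a mm-to-m conversion:

```python
# Pinhole-camera distance estimate used as the scale factor d.
def scale_factor(f_mm, tag_height_mm, image_height_px,
                 tag_height_px, sensor_height_mm, c=0.001):
    """Distance from camera to tag; c converts mm to the working unit (m)."""
    return c * (f_mm * tag_height_mm * image_height_px) / (
        tag_height_px * sensor_height_mm)

# Example: a 500 mm tag imaged at 250 px on a 1080 px tall, 5.0 mm sensor
# through a 4.0 mm lens.
d = scale_factor(4.0, 500.0, 1080.0, 250.0, 5.0)
```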
Figure 22 shows the trajectory generated by our generalized sensor model in an outdoor environment using Monte Carlo simulation. Figure 23 shows the axis-wise plot of the raw AprilTag data (red) and the particle filter output (blue). It shows that the filter is removing the noise and improving the overall performance.