Article

Position Estimation Using the Image Derivative †

by Daniele Mortari 1,*, Francesco De Dilectis 2 and Renato Zanetti 3
1 Aerospace Engineering, Texas A&M University, College Station, TX 77843, USA
2 Aerospace Engineering and Engineering Mechanics, The University of Texas at Austin, Austin, TX 78712, USA
3 NASA Johnson Space Center, Houston, TX 77058, USA
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the 25th AAS/AIAA Space Flight Mechanics Meeting, Williamsburg, VA, USA, 11–15 January 2015.
Aerospace 2015, 2(3), 435-460; https://doi.org/10.3390/aerospace2030435
Submission received: 19 March 2015 / Revised: 18 June 2015 / Accepted: 18 June 2015 / Published: 3 July 2015
(This article belongs to the Special Issue Driving Forward Aerospace Innovation)

Abstract

This article describes an image processing algorithm to identify the size and shape of a spherical reflecting celestial body prominently depicted in images taken from a spacecraft with an optical camera, with the purpose of estimating the relative distance between target and observer in magnitude and direction. The approach is based on the fact that, in such images, the pixels belonging to the target's hard edge have the highest values of the image derivative; therefore, they are easily recognizable when the image is processed with a gradient filter. Any extraneous points polluting the dataset (outliers) are eliminated by two methods applied in sequence. The target center and radius are then estimated by non-linear least squares using circular sigmoid functions. The proposed image processing algorithm has been applied to real and synthetic Moon images. An error analysis is also performed to determine the performance of the proposed method.

1. Introduction

Autonomous positioning has been a topic of research for many years [1], but its implementation has proven more complicated than autonomous attitude determination. As missions become more complex, it is desirable for a spacecraft to be able to estimate its position independently from the ground, if not all of the time, at least as a backup in case of a loss of communication, either accidental or foreseen. In particular, spacecraft orbiting or en route to a satellite or a planet other than the Earth are periodically out of line-of-sight from any ground station and, thus, have to rely on their on-board capabilities. Utilizing pre-existing sensors to obtain a navigation solution is an attractive solution to the backup autonomous navigation problem. Among those, optical cameras have been a mainstay of spacecraft sensor suites since the beginning of space exploration. The availability of onboard cameras, together with the advancement of the fields of image processing and feature identification, has led to the approach proposed here: an algorithm for autonomous spacecraft positioning via analysis of optical images of a reflective celestial body in the proximity of the spacecraft itself.
The first optical navigation approaches used cameras to take pictures of illuminated target bodies against a background of cataloged reference stars. This technique was first proposed in the 1960s (see [2]). The approach was then adopted in six Voyager flybys of the outer planets [3,4] and in Cassini's orbital operations at Saturn [5]. In particular, for the Voyager II Uranus approach and encounter, [3,4] presented the optical observables. These observables were based on measuring the angular distances between stars and the observed body (considered as a circle) and require the use of two cameras. The approach proposed in this paper uses a single camera and does not use observed stars. The main reason is that the optimal observation of stars and of illuminated bodies requires different exposures (integration times); to observe both stars and bodies, two cameras are needed.
The work in [6] adopts the Canny edge detection algorithm to identify the Moon edges. Potential false edges are then removed, and the limb profile is identified by the Levenberg–Marquardt ellipse least-squares algorithm. The line-of-sight (LOS) vector to the centroid of the observed object is then obtained. The approach taken in this study is different and optimized for use in small on-board computers. More accurate edge detection algorithms and optimization techniques are certainly available in the literature. Many alternative approaches to better select contours (hard edge pixels) do exist: Laplacian, Sobel, Roberts and Prewitt operators, as well as Gaussian derivatives and Gabor filters, are certainly more accurate (while computationally more intensive) than the proposed simple numerical gradient, and an extensive literature exists comparing them. Pushing the accuracy of the pixel selection has been analyzed and found unnecessary, because the final accuracy (what really matters) is provided by the results of the nonlinear least-squares estimation using circular sigmoid functions. To avoid computationally-intensive approaches, simple image differentiation is here adopted to select the pixels of interest. Using these pixels, an accurate Moon radius and centroid are estimated by a fast nonlinear least-squares fit using circular sigmoid functions.
The works in [7,8] contain a thorough comparison of various mathematical approaches to best fitting circles and ellipses. While this study deals with the same problem, it does not follow the above approach, for two reasons: (1) the image-plane eccentricity of the observed Moon ellipse is very small (almost negligible) for a typical imager; and (2) when the ellipse eccentricity is so small (as in our physical example), the least-squares estimation of the ellipse orientation (and of the difference of the semi-axes) is unreliable (unobservable parameters). In this case, circle least-squares estimation is numerically more stable.
The pin-hole camera model performs a gnomonic projection, and this projection introduces radial distortion. This means that the spherical Moon is observed as an ellipse. However, using the 1944 × 2592 Aptina imager with a 0.0022-mm pixel size and 16-mm focal length (which allows one to observe the Moon for an entire cislunar trajectory), the worst radial distortion (when the Moon is observed as far as possible from the optical axis) is less than one pixel. These (very small) radius and centroid corrections (due to this small radial distortion) are easy to implement, and the correction depends on the specific imager adopted. The elliptical best fitting approach is discouraged because, for such a small eccentricity, the ellipse orientation becomes unobservable. Analytical studies and extensive numerical tests have been done to support the proposed approach.
This work refines a previous study [9], focusing on the analysis of Moon images, both real and synthetic. However, the basic idea is applicable to any spherical reflective celestial body within relative proximity to the spacecraft. In this study, the Moon radius and centroid are estimated in two steps. First, Taubin's technique [10] is adopted just to obtain a crude first estimate. This first estimate is then used to select the set of pixels between two radii across the Moon's hard edge, and it serves as the initial guess for the subsequent accurate iterative nonlinear least-squares estimation using circular sigmoid functions. Circular sigmoid functions accurately describe the actual gray tone smooth transition at the hard edge. During this nonlinear least-squares estimation, the smoothing parameter (k) is also estimated.
The image processing approach is described in detail in the following sections. Further, several application examples are illustrated, together with an error analysis.

2. Concept

The starting point of the method is the realization that, in an image taken by a spacecraft of a reflective celestial body, if the illumination conditions are right and the pointing appropriate, the target will figure prominently in the image against a mostly black background. Therefore, when the image is processed with a gradient filter, the pixels belonging to the hard edge have the highest magnitudes, which makes them easily identifiable. These points can then be best fitted to estimate the apparent target radius and center and, thus, the relative distance between observer and target. Several steps need to be taken to ensure that the data points are properly selected and that the best fitting is accurate enough. The proposed algorithm can be summarized as follows:
  • The geometrical illumination conditions and the expected size of the observed body are first computed. These data allow one to determine whether the target is sufficiently illuminated and of sufficient size to perform the estimation process.
  • The image differentiation is obtained with a four-point central difference and a single Richardson extrapolation.
  • The pixels of the gradient image are sorted in descending order. Any outliers present in this set, due to noise or background stars, are then eliminated by two sequential filters.
    The first filter validates an edge pixel based on eigenanalysis of a small gradient box around the pixel and by investigation of the same box in the original image.
    The second filter is a modified version of the RANSAC algorithm.
  • The edge points selected are then used to obtain the first estimate of center and radius. The Taubin best fitting algorithm [10] is used.
  • The first estimate is refined via non-linear least-squares using a circular sigmoid function to describe the edge transition.
Required inputs to the image processing software are (see Figure 1):
  • Camera: This includes the focal length (f) in mm and the imager data (number and size of rows and columns).
  • Image: This is the image (made of N_r rows and N_c columns) taken at the specified time.
  • Time: This is the time instant at which the image is taken. This time is used to compute the Moon and Sun positions in J2000.
  • Position: This input consists of a raw estimate of the spacecraft position, E{r}, and its accuracy, σ_r. These data are used to estimate the Moon's radius on the imager and the expected illumination condition. This position estimate is very rough, typically having an accuracy of less than 10,000 km.
  • Attitude: This input consists of the spacecraft attitude, E{q}, and attitude accuracy, σ_q. These data are used to compute the spacecraft-to-Moon vector in the J2000 reference frame.
Figure 1 illustrates the algorithm.
Figure 1. Image processing general flowchart.
The inputs of the proposed approach are: (1) the time; (2) the camera focal length (f); (3) the pixel size (d); (4) the N_r × N_c image; and (5) a rough estimate of the position (E{r_0}) with its accuracy (σ_r). From the time, the inertial positions of the Moon (r_m) and Sun (r_s) can be computed. Using the knowledge of the camera parameters and E{r_0}, it is possible to derive important internal parameters, such as the expected Moon radius (in pixels) on the imager and the observed body illumination conditions. If the image is synthetic (e.g., generated by EDGE), then a simple convolution filter smooths the image to remove pixellation. The image gradient is then generated using numerical differentiation, and the N highest derivatives are selected (most of them belong to the Moon edge, while a few are outliers due to the Moon topology). The outliers are then eliminated by two sequential filters, a box-based one followed by RANSAC. Once the outliers are eliminated, a preliminary estimation of the Moon center and radius is performed by least-squares using the Taubin approach. Using this initial estimate, a set of pixels across the estimated Moon edge and angularly close to the Sun direction in the imager is identified. These pixels are then used to obtain an accurate estimation of the Moon center and radius using the circular sigmoid function as reference model. This allows us to estimate the vector to the Moon in the camera reference frame (r̂_0^b). Finally, using the camera attitude (q̂_0), it is possible to compute the vector to the Moon in the inertial reference frame (r̂_0^J2000).
The first check on the data is done using the rough expected value for the parameters, and the following section describes these calculations done for the Moon; however, they are applicable to any (almost) spherical celestial body. The following is based on geometric considerations alone and, thus, does not involve processing of the image data. This is why the parameters are described as “expected” values.

3. Expected Radius and Illumination Parameter

As the relative positions of observer, target and light source change, the visible part of the target varies. These changes can lead to cases in which the target is too thin or completely dark from the point of view of the observer, and therefore, no analysis can take place. The algorithm catches these degenerate cases before they can cause critical errors further down in the process. Two parameters are introduced to describe the target's shape (from now on, for simplicity, the term "target" will refer just to the illuminated portion rather than the whole object, as this is what the camera is actually able to observe): one is the expected radius of the target, p_r, and the other is the cross-length measured at its thickest point, p_i (see Figure 2), which is referred to as the illumination parameter. Both of these distances are measured in pixels, and they are estimated from the camera parameters (focal length f, pixel size d) and the observation geometry. For this purpose, the observer's position is needed; however, even a high uncertainty (σ_D ≈ 10,000 km) has a very small impact on p_r and p_i, as long as the spacecraft is farther from the target than a certain threshold. Such a rough position estimate can be considered known a priori to start the analysis.
Figure 2. Expected Moon radius (p_r) and illumination parameter (p_i).
To begin, the expected radius and illumination angle are computed as:
$$p_r = \frac{f}{d} \cdot \frac{R_m}{\sqrt{|\mathbf{r}_o|^2 - R_m^2}} \quad \text{(pixels)}$$
$$\vartheta = \arccos\left(\hat{\mathbf{r}}_s \cdot \hat{\mathbf{r}}_o\right)$$
where r̂_s is the direction of the Sun-to-Moon rays, r_o is the observer-to-Moon vector and R_m is the Moon radius. The expected radius is of great importance, not only because the illumination parameter is measured in relation to it, but also because it predicts the overall size of the target in the image, which is used in subsequent parts of the algorithm. The value of p_i is determined differently depending on the values of p_r and ϑ.
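The two relations above map the rough position estimate into the expected on-imager geometry. A minimal sketch in Python, assuming r_o and r_s are NumPy vectors in km (the function name is illustrative, not the authors' implementation):

```python
import numpy as np

R_MOON = 1737.5  # Moon radius in km (the value used in Section 7)

def expected_radius_and_angle(r_o, r_s, f_mm, pix_mm):
    """Expected Moon radius on the imager (pixels) and illumination angle.
    r_o: observer-to-Moon vector (km); r_s: Sun-to-Moon rays' direction;
    f_mm, pix_mm: focal length and pixel size in mm."""
    D = np.linalg.norm(r_o)
    p_r = (f_mm / pix_mm) * R_MOON / np.sqrt(D**2 - R_MOON**2)
    theta = np.arccos(np.dot(r_s / np.linalg.norm(r_s), r_o / D))
    return p_r, theta
```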

3.1. Full or New Moon

With reference to Figure 3, the angle α_m can be derived from simple trigonometry:
$$|\mathbf{r}_o| \sin\alpha_m = R_m$$
This equation also allows one to define a critical distance, r_cr, as the distance at which α_m = ϑ:
$$r_{cr} \sin\vartheta = R_m$$
Whether |r_o| is greater or smaller than r_cr, together with the value of ϑ, determines the case, according to the following scheme:
$$\text{if } |\mathbf{r}_o| < r_{cr}: \quad \begin{cases} \vartheta < \pi/2 & \Rightarrow \; p_i = 2\,p_r \quad \text{(full Moon)} \\ \vartheta > \pi/2 & \Rightarrow \; p_i = 0 \quad \text{(new Moon)} \end{cases}$$
Figure 3. Full/new Moon geometry.

3.2. Gibbous Moon

The gibbous Moon occurs when:
$$|\mathbf{r}_o| > r_{cr} \quad \text{and} \quad \vartheta < \frac{\pi}{2}$$
For this configuration (see Figure 4), p_i can be computed by finding the auxiliary variables x and α_e:
$$x^2 = |\mathbf{r}_o|^2 + R_m^2 - 2\,|\mathbf{r}_o|\, R_m \sin\vartheta$$
$$\sin\alpha_e = \frac{R_m \cos\vartheta}{x}$$
which allow one to find the "excess" factor p_e:
$$p_e = p_r\, \frac{\tan\alpha_e}{\tan\alpha_m}$$
and therefore,
$$p_i = p_r + p_e = \left(1 + \frac{\tan\alpha_e}{\tan\alpha_m}\right) p_r$$
Figure 4. The gibbous Moon geometry.

3.3. Crescent Moon

The crescent Moon occurs when:
$$|\mathbf{r}_o| > r_{cr} \quad \text{and} \quad \vartheta > \frac{\pi}{2}$$
In this case (see Figure 5), the two variables x and α_e can be found according to:
$$x^2 = |\mathbf{r}_o|^2 + R_m^2 - 2\,|\mathbf{r}_o|\, R_m \sin\vartheta$$
$$\sin\alpha_e = -\frac{R_m \cos\vartheta}{x}$$
and therefore:
$$p_i = p_r - p_e = \left(1 - \frac{\tan\alpha_e}{\tan\alpha_m}\right) p_r$$
Figure 5. The crescent Moon geometry.
Distinguishing among these cases is important to properly initialize the algorithm. For this purpose, the analytical relations described above are supplemented with thresholds to ensure that the system works properly; for instance, when p_i is lower than a certain value, the crescent is deemed too thin to be a reliable source of information, and the target is considered invisible. The three cases are collected in the sketch below.
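A compact way to see how p_i follows from p_r, ϑ and the observer distance is the small helper below (a sketch under the assumption that the degenerate-case thresholds mentioned above are applied by the caller; the function name is illustrative):

```python
import numpy as np

def illumination_parameter(dist, theta, p_r, R_m=1737.5):
    """Illumination parameter p_i (pixels) for the full/new, gibbous and
    crescent cases of Sections 3.1-3.3. dist = |r_o| in km, theta in rad."""
    alpha_m = np.arcsin(R_m / dist)
    r_cr = R_m / np.sin(theta)                     # critical distance
    if dist < r_cr:                                # Section 3.1: full or new Moon
        return 2.0 * p_r if theta < np.pi / 2 else 0.0
    # auxiliary length x, common to the gibbous and crescent cases
    x = np.sqrt(dist**2 + R_m**2 - 2.0 * dist * R_m * np.sin(theta))
    if theta < np.pi / 2:                          # Section 3.2: gibbous Moon
        alpha_e = np.arcsin(R_m * np.cos(theta) / x)
        return (1.0 + np.tan(alpha_e) / np.tan(alpha_m)) * p_r
    alpha_e = np.arcsin(-R_m * np.cos(theta) / x)  # Section 3.3: crescent Moon
    return (1.0 - np.tan(alpha_e) / np.tan(alpha_m)) * p_r
```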

3.4. Special Case: Cropped Target

It is also possible for the target not to be completely within the field-of-view. The image boundaries are monitored with a rectangular mask proportional to the dimensions of the image itself. At each step, the cumulative brightness within the mask is computed and compared with a threshold value; a value tested to provide good performance is 1/10 of the maximum gray tone. From the number of threshold crossings, it is possible to determine if and how many times the target has been cropped by the frame. An example application is depicted in Figure 6.
Figure 6. Processing of a cropped image. (a) Cropped image of the Moon; (b) Results of the cropped detection.
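A rough sketch of this cropping test, under the assumption that the border mask is a thin strip a few pixels wide (the strip width and the exact crossing bookkeeping are illustrative, not the authors' values):

```python
import numpy as np

def count_border_crossings(image, margin=2, frac=0.1):
    """Walk the image border (a strip of width `margin`) as one closed loop and
    count how many times its brightness rises above `frac` of the maximum gray
    tone; each rising edge indicates one segment where the target is cropped."""
    thr = frac * image.max()
    nr, nc = image.shape
    top    = image[:margin, :].mean(axis=0)          # left to right
    right  = image[:, nc - margin:].mean(axis=1)     # top to bottom
    bottom = image[nr - margin:, ::-1].mean(axis=0)  # right to left
    left   = image[::-1, :margin].mean(axis=1)       # bottom to top
    loop = np.concatenate([top, right, bottom, left]) > thr
    return int(np.count_nonzero(loop[1:] & ~loop[:-1]) + (loop[0] & ~loop[-1]))
```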
The values of p_r and p_i are used to initialize the following parameters (collected in the sketch that follows this list):
  • Number of pixels in the gradient image to be analyzed (N): This number has to be proportional to the length of the visible Moon arc. However, at this stage, it is impossible to know the angular width of the target; moreover, the presence of the terminator introduces a number of high contrast pixels that are not part of the Moon edge. Ultimately, N is chosen to be proportional to p_r and p_i, according to the following:
    $$N = k\, \pi\, p_r$$
    where k is a factor that varies linearly with p_i, from a minimum value (two) when p_i ≤ 10 to a maximum equal to twice the minimum when p_i = 2 p_r:
    $$k = 2\left(1 + \frac{p_i - 10}{2\,p_r - 10}\right)$$
  • The size of the box for box-based outlier removal (s): Because of the way the code works, the box has to be chosen small enough that the segment of the Moon edge within the box appears almost straight. Extensive testing has shown that the best choice is to take s = 0.8 p_r, with a lower limit of s = 3 and an upper limit of s = 21.
  • RANSAC maximum distance: This is chosen equal to d_max = 5 p_r/100.
  • Number of tests for RANSAC (N_T): RANSAC randomly selects for testing a subset of all of the possible triplets of pixels remaining after the box-based outlier removal. Testing has shown that the best choice is:
    $$N_T = \frac{1}{5}\binom{N_1}{3} = \frac{N_1 (N_1 - 1)(N_1 - 2)}{30}$$
    where N_1 is the number of selected pixels remaining after the box-based outlier filter rejection. A value of 500 is enforced as an upper limit.
  • Convergence tolerance for circular sigmoid function (CSF) least squares [9]: At the k-th iteration, the convergence check is performed on the solution variation ΔX_k = |Δr_k² + Δc_k² − ΔR_k²|. Assuming the required accuracy is 0.3 pixel (in the center and radius estimation), the CSF-LS convergence criterion is taken 1/50 of that, ΔX_min = 0.3/50.
  • The number of iterations for CSF least squares is N_iter = 50.
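The parameter initialization above can be gathered into one small helper (a sketch; the names and the clipping of the box size to [3, 21] follow the values quoted in the list):

```python
import numpy as np

def internal_parameters(p_r, p_i, N1=None):
    """Internal parameters derived from the expected radius p_r and the
    illumination parameter p_i (both in pixels)."""
    k = float(np.clip(2.0 * (1.0 + (p_i - 10.0) / (2.0 * p_r - 10.0)), 2.0, 4.0))
    N = int(k * np.pi * p_r)                                  # pixels to analyze
    s = int(np.clip(round(0.8 * p_r), 3, 21))                 # box size
    d_max = 5.0 * p_r / 100.0                                 # RANSAC distance
    params = dict(k=k, N=N, box=s, d_max=d_max,
                  dX_min=0.3 / 50.0, N_iter=50)               # CSF-LS settings
    if N1 is not None:                                        # after the box filter
        params["N_T"] = min(500, N1 * (N1 - 1) * (N1 - 2) // 30)
    return params
```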
As indicated in Figure 1, several checks are implemented throughout the code. Failure to pass any of the tests causes the process to abort.
  • Check 1 (after internal parameter computation):
    • The Moon is too small (apparent diameter less than a set number of pixels) or too big (the Moon is completely or almost completely filling the image).
    • The Moon is new (or otherwise, its illuminated fraction is negligible) or out of frame altogether.
  • Check 2 (after box-based outlier removal; see Section 4.2):
    Too many outliers have been identified, leaving the pool of valid data below a set limit.
  • Check 3 (after RANSAC-based outlier removal; see Section 4.3):
    Too many outliers have been identified, leaving the pool of valid data below a set limit.
    The estimated radii found by RANSAC are not consistent with the expected apparent radius previously found.
  • Check 4 (after Taubin; see Section 5.1):
    The radius found is not consistent with the expected value.
    The computed center position of the Moon is outside the imager and too far away from the optical axis.
  • Check 5 (inside and after CSF-LS):
    The maximum number of iterations is exceeded.
    The estimates of the radius and center are too different from the results of Taubin.

4. Gradient-Based Image Processing Algorithm

The gradient-based image processing approach is based on the expectation that the part of the image with the strongest variation of gray tones is the Moon’s observable hard edge. Pixels with the highest derivatives are the most likely to belong to the hard edge. However, the presence of bright stars, Moon surface roughness and other unexpected objects within the field-of-view creates several bright gradient pixels, which are not in fact part of the Moon hard edge. These points, called outliers, must therefore be eliminated from the dataset prior to using a best fitting algorithm to determine the sought geometrical properties. In this work, two separate outlier removal methods are implemented in sequence.

4.1. Image Differentiation

The gradient of the discrete gray tone function is computed by numerical differentiation; in particular, the gradient is computed using four-point central difference with a single Richardson extrapolation.
The row and column four-point central partial differences computed at pixel [r, c] are:
$$g_r(r,c) = \frac{8\,[I(r+1,c) - I(r-1,c)] - [I(r+2,c) - I(r-2,c)]}{12\,h}$$
$$g_c(r,c) = \frac{8\,[I(r,c+1) - I(r,c-1)] - [I(r,c+2) - I(r,c-2)]}{12\,h}$$
These derivatives are accurate to order h⁴, where h is the pixel dimension. The image gradient magnitude is then computed as:
$$g(r,c) = \sqrt{g_r^2(r,c) + g_c^2(r,c)}$$
A single Richardson extrapolation requires computing the gradient magnitude g one time with step 2h (obtaining g_2h) and one time with step h (obtaining g_h). The true derivative can then be written as:
$$g\big|_{true} = g_{2h} + c\,(2h)^4 \qquad \text{and} \qquad g\big|_{true} = g_h + c\,h^4$$
Equating these two expressions, we obtain:
$$g_h - g_{2h} = c\,h^4\,(2^4 - 1) \quad \Longrightarrow \quad c\,h^4 = \frac{g_h - g_{2h}}{2^4 - 1}$$
and therefore:
$$g\big|_{true} = g_h + \frac{g_h - g_{2h}}{2^4 - 1}$$
The effect of a single Richardson extrapolation is illustrated using the six-hump camel back function:
$$f(x,y) = \frac{x^2}{4}\left(4 - \frac{21}{40}\,x^2 + \frac{x^4}{8}\right) + \frac{x\,y}{4} + y^2\left(\frac{y^2}{4} - 1\right)$$
Using a single Richardson extrapolation, an accuracy gain of 4–5 orders of magnitude can be obtained. In this specific example, the central two-point and central four-point differences provide 10⁻³ and 10⁻⁸ level accuracy, respectively, while with Richardson extrapolation they provide 10⁻⁸ and 10⁻¹³ level accuracy, respectively. Because it provides accuracy close to machine precision, the four-point central differentiation with Richardson extrapolation is selected to evaluate the image gradient.
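A minimal NumPy sketch of the differentiation scheme described above (edge handling wraps around, which is acceptable for interior Moon-edge pixels; the function names are illustrative):

```python
import numpy as np

def _central4(I, step, h=1.0):
    """Four-point central derivatives along rows and columns with the given
    pixel step (step=1 gives g_h, step=2 gives g_2h)."""
    def shift(a, dr, dc):
        return np.roll(np.roll(a, dr, axis=0), dc, axis=1)
    gr = (8.0 * (shift(I, -step, 0) - shift(I, step, 0))
          - (shift(I, -2 * step, 0) - shift(I, 2 * step, 0))) / (12.0 * step * h)
    gc = (8.0 * (shift(I, 0, -step) - shift(I, 0, step))
          - (shift(I, 0, -2 * step) - shift(I, 0, 2 * step))) / (12.0 * step * h)
    return gr, gc

def gradient_richardson(I, h=1.0):
    """Gradient magnitude evaluated with steps h and 2h and combined as
    g ~ g_h + (g_h - g_2h) / (2**4 - 1)."""
    I = I.astype(float)
    g_h  = np.hypot(*_central4(I, 1, h))
    g_2h = np.hypot(*_central4(I, 2, h))
    return g_h + (g_h - g_2h) / 15.0
```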
The image gradient pixels are sorted by gradient value in descending order, and the outliers are then eliminated with the two methods discussed below.

4.2. Box-Based Outlier Identification

The first approach is based on a local analysis around the pixel being considered as potentially belonging to the target's edge. A mask like the one in Figure 7, whose size depends on the parameter p_r, is centered on a candidate pixel. If the pixel does indeed belong to the target's edge, then it is likely to be part of a sequence of pixels of comparable brightness distributed along a preferred direction. Moreover, the pixel should sit on the border between two areas where the gray tone "flattens" (the inside and outside of the target) and which therefore should appear as dark in the gradient image. If the pixel does not satisfy all of these conditions, it is rejected. To perform this analysis, first, the tensor of inertia of the gradient image within the mask is computed, and then, its eigenvectors and eigenvalues are found. The eigenvector associated with the minimum eigenvalue (continuous line direction in Figure 7) is inclined by an angle γ with respect to the row axis direction and represents the direction of the supposed edge. If γ mod π < π/2, then the farthest pixels from the edge line are those marked by "D", while if γ mod π > π/2, the farthest pixels are those marked by "U". Several conditions have to be satisfied for the pixel to be accepted (a sketch of the test follows Figure 7):
  • The ratio between the eigenvalues must be lower than a certain threshold, chosen experimentally at 0.3. This indicates that the distribution of bright pixels has a preferential direction.
  • The values of the gradient in the pixels furthest from the edge line should be small, as both should be far from areas of variable brightness. Moreover, their gray tones should differ noticeably, as one belongs to the target and the other to the night sky.
  • For the pixels along the continuous line in Figure 7, the ratio of their gradient values should be close to one (g_min/g_max > 0.6), as they both supposedly belong to the edge.
  • For the pixels marked by "D" in the same figure, the ratio of their gray tone values should be close to zero (g_min/g_max < 0.3), as one belongs to the night sky and the other to the target.
Figure 7. Mask for the box-based outlier identification technique (mask size: 7 × 7 ).
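A condensed sketch of this test (the second-moment analysis and the two ratio checks; the exact "U"/"D" corner bookkeeping of Figure 7 is simplified here, and only the thresholds quoted above are assumed):

```python
import numpy as np

def box_edge_test(grad, image, r, c, s=7,
                  eig_ratio=0.3, grad_ratio=0.6, tone_ratio=0.3):
    """Return True if the candidate pixel (r, c) behaves like a Moon-edge pixel
    inside an s x s box of the gradient image `grad` and the original `image`."""
    hs = s // 2
    box = grad[r - hs:r + hs + 1, c - hs:c + hs + 1].astype(float)
    dr, dc = np.mgrid[-hs:hs + 1, -hs:hs + 1]
    w = box / box.sum()
    # second moments of the gradient "mass" about the box center
    S = np.array([[(w * dr * dr).sum(), (w * dr * dc).sum()],
                  [(w * dr * dc).sum(), (w * dc * dc).sum()]])
    vals, vecs = np.linalg.eigh(S)
    if vals[0] / vals[1] > eig_ratio:               # no preferential direction
        return False
    er, ec = vecs[:, 1]                             # dominant spread = edge direction
    p1 = grad[r + int(round(hs * er)), c + int(round(hs * ec))]
    p2 = grad[r - int(round(hs * er)), c - int(round(hs * ec))]
    if min(p1, p2) / max(p1, p2) < grad_ratio:      # both ends must lie on the edge
        return False
    q1 = float(image[r + int(round(hs * ec)), c - int(round(hs * er))])
    q2 = float(image[r - int(round(hs * ec)), c + int(round(hs * er))])
    return min(q1, q2) / max(q1, q2) < tone_ratio   # one side Moon, one side sky
```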

4.3. RANSAC for Circle and Ellipse

The second technique is the standard RANSAC algorithm, an abbreviation for "random sample consensus" [11]. It was developed for the robust fitting of models in the presence of many data outliers. The algorithm is very simple and can be applied to the fitting of many geometric entities, such as lines, circles, ellipses, etc. It is a non-deterministic iterative algorithm, in the sense that it produces a reasonable result only with a given probability, which increases as more iterations are allowed. It works by selecting random subsets of size equal to the number of unknown parameters and then checking whether the remaining data points lie further than a certain threshold from the curve determined by the initial subset. Repeating this process for many randomly-chosen subsets is likely to find and remove the outliers. More formally, given a fitting problem with n data points and m unknown parameters, and starting with N_min = n, the classic RANSAC algorithm performs the following steps N_test times:
  • selects m data points at random;
  • finds the curve passing through these m points, thus estimating the associated m parameters;
  • counts how many data items are located at a distance greater than a minimum value d_min from the curve; call this number N_k;
  • if N_k < N_min, then sets N_min = N_k.
The basic cycle is depicted in Figure 8.
Crucial to a successful implementation of RANSAC is the method used for the random selection of subsets. A purely random selection approach may persist in choosing outliers, with a great waste of computational time. On the other hand, a procedure based on deterministic inner loops can also be inefficient if, by chance, one outlier is selected by the outermost loop. Therefore, as an improvement of the basic method, a technique originally developed for pyramid star identification [12] has been adopted, which always selects different subsets by continuously rotating the dataset indexes. A sketch of the circle variant follows Figure 8.
Figure 8. RANSAC flowchart.
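A minimal sketch of the circle RANSAC step (the triplet selection here is a plain index stride, standing in for the pyramid-style index rotation cited above; `points` is assumed to be an (n, 2) float array of [row, col] coordinates):

```python
import numpy as np

def circle_through_3(p1, p2, p3):
    """Center and radius of the circle through three 2-D points."""
    A = 2.0 * np.array([p2 - p1, p3 - p1])
    b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
    center = np.linalg.solve(A, b)
    return center, np.linalg.norm(p1 - center)

def ransac_circle(points, d_min=4.0, n_tests=100):
    """Keep the circle fit that leaves the fewest points farther than d_min,
    then discard those points as outliers."""
    n = len(points)
    step = max(n // 3, 1)
    best, n_min = np.ones(n, dtype=bool), n + 1
    for k in range(n_tests):
        i, j, l = k % n, (k + step) % n, (k + 2 * step) % n
        if len({i, j, l}) < 3:
            continue
        try:
            center, R = circle_through_3(points[i], points[j], points[l])
        except np.linalg.LinAlgError:              # collinear triplet
            continue
        dist = np.abs(np.linalg.norm(points - center, axis=1) - R)
        out = dist > d_min
        if out.sum() < n_min:
            n_min, best = out.sum(), ~out
    return points[best]
```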

5. Circle Best Fitting

Once the pixels in the image have been selected, the algorithm determines the geometric characteristics of the observed body, that is, the apparent radius and the center position within the image. The problem of recognizing geometrical entities and reducing them to a minimal set of parameters, sometimes called feature extraction, is very important in image processing for a number of applications in computer graphics and computer vision. Consequently, a great amount of research exists on developing methods to efficiently and unequivocally recognize geometrical shapes within an image; in particular, much literature is available regarding circle identification. Identifying geometrical figures within a discrete representation of the physical world reduces essentially to a best fitting problem. Currently, feature recognition techniques for conic curves fall within one of the following categories:
  • Geometric fit: Given a set of n data points, iteratively converge to the minimum of:
    $$L_g = \sum_{i=1}^{n} d_i^2(\mathbf{x})$$
    where d_i are the geometric distances from the data points to the conic and x is the vector of the parameters to be found.
  • Algebraic fit: the d_i are replaced with some other functions of the parameters, f_i, and thus, a new cost function is defined:
    $$L_a = \sum_{i=1}^{n} f_i^2(\mathbf{x})$$
These functions are typically algebraic expressions of the conic in question, which are usually simpler than the expressions for the geometric distances.
Historically, geometric fit has been regarded as being the more accurate of the two. However, this accuracy comes at the price of added computational cost and the possibility of divergence. Algebraic fit is typically reliable, simple and fast. A common practice is to use an algebraic fit to generate an initial guess for a subsequent iterative geometric fit, but in recent years, new algebraic fitting routines have shown such a degree of accuracy that geometric fits cannot improve it noticeably [13].

5.1. Taubin Best Fitting

The following is a description of the Taubin best fitting method, first described in [10], but using the formalism found in [13]. This algorithm is used to obtain the initial estimate for the sigmoid best fitting algorithm. Given the implicit equation of a circle:
$$a(x^2 + y^2) + bx + cy + d = a\,z + bx + cy + d = 0$$
the Taubin best fit minimizes the function:
$$F_T(a,b,c,d) = \frac{n \sum_i (a z_i + b x_i + c y_i + d)^2}{\sum_i \left(4 a^2 z_i + 4 a b\, x_i + 4 a c\, y_i + b^2 + c^2\right)}$$
which is equivalent to minimizing:
$$F_T(a,b,c,d) = \sum_i (a z_i + b x_i + c y_i + d)^2$$
subject to the constraint:
$$4 a^2 \bar{z} + 4 a b\, \bar{x} + 4 a c\, \bar{y} + b^2 + c^2 = 1$$
where the bar sign indicates the simple average of the variables:
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad \bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i, \qquad \bar{z} = \frac{1}{n}\sum_{i=1}^{n} (x_i^2 + y_i^2) = \frac{1}{n}\sum_{i=1}^{n} z_i$$
One important property of this method is that the results are independent of the reference frame. By reformulating this problem in matrix form, it becomes apparent how it is equivalent to a generalized eigenvalue problem. Indeed, define:
$$\mathbf{A} = \begin{Bmatrix} a \\ b \\ c \\ d \end{Bmatrix}, \qquad Z = \begin{bmatrix} z_1 & x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ z_n & x_n & y_n & 1 \end{bmatrix}, \qquad M = \frac{1}{n}\, Z^{\mathrm T} Z = \begin{bmatrix} \overline{zz} & \overline{xz} & \overline{yz} & \bar{z} \\ \overline{xz} & \overline{xx} & \overline{xy} & \bar{x} \\ \overline{yz} & \overline{xy} & \overline{yy} & \bar{y} \\ \bar{z} & \bar{x} & \bar{y} & 1 \end{bmatrix}$$
then the best fitting problem can be rewritten as:
$$\min_{\mathbf{A}}\; \mathbf{A}^{\mathrm T} M \mathbf{A} \qquad \text{subject to} \qquad \mathbf{A}^{\mathrm T} N \mathbf{A} = 1$$
where:
$$N = \begin{bmatrix} 4\bar{z} & 2\bar{x} & 2\bar{y} & 0 \\ 2\bar{x} & 1 & 0 & 0 \\ 2\bar{y} & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$
Therefore, the optimization problem is equivalent to the following generalized eigenvalue problem:
$$M \mathbf{A} = \eta\, N \mathbf{A}$$
Since, for each solution pair [A, η]:
$$\mathbf{A}^{\mathrm T} M \mathbf{A} = \eta\, \mathbf{A}^{\mathrm T} N \mathbf{A} = \eta$$
then, to minimize AᵀMA, the smallest positive eigenvalue is chosen. Because matrix N has rank three, such an eigenvalue is found by explicitly solving the cubic characteristic equation without the necessity of implementing SVD or other computationally-intensive algorithms.
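For illustration, the generalized eigenvalue formulation can be coded directly as below (a sketch that uses SciPy's generic eigensolver in place of the closed-form cubic solution mentioned in the text):

```python
import numpy as np
from scipy.linalg import eig

def taubin_circle_fit(x, y):
    """Taubin circle fit via the generalized eigenvalue problem M A = eta N A.
    Returns the circle center (xc, yc) and radius R."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    z = x**2 + y**2
    Z = np.column_stack([z, x, y, np.ones_like(x)])
    M = (Z.T @ Z) / len(x)
    zb, xb, yb = z.mean(), x.mean(), y.mean()
    N = np.array([[4 * zb, 2 * xb, 2 * yb, 0],
                  [2 * xb, 1,      0,      0],
                  [2 * yb, 0,      1,      0],
                  [0,      0,      0,      0]], dtype=float)
    vals, vecs = eig(M, N)
    vals = np.real(vals)
    ok = np.isfinite(vals) & (vals > 0)            # discard infinite/negative roots
    A = np.real(vecs[:, np.flatnonzero(ok)[np.argmin(vals[ok])]])
    a, b, c, d = A
    xc, yc = -b / (2 * a), -c / (2 * a)
    R = np.sqrt(b**2 + c**2 - 4 * a * d) / (2 * abs(a))
    return xc, yc, R
```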

6. Numerical Tests

To demonstrate the capability of the developed image processing code, three numerical examples of a partially-observed Moon are presented here. All of these tests have been performed with no information about time, camera data and the observer, Moon and Sun positions. This information would normally help to define the best values of the internal parameters; since it is missing, the internal parameters have been set to the same values in all three tests. In these tests, the image derivative is obtained using the four-point central approach with no Richardson extrapolation. The adopted values are:
  • Maximum number of highest derivatives selected: N = 1500;
  • Box-based outlier elimination: box size = 21 × 21;
  • Box-based outlier elimination: derivative ratio > 0.6;
  • Box-based outlier elimination: corner ratio < 0.3;
  • Box-based outlier elimination: eigenvalue ratio < 0.3;
  • RANSAC: minimum distance d_min = 4 pixels;
  • RANSAC: number of tests = 100.

6.1. Example 1: Real Image, Four-Times Cropped

The original image of Example 1, shown in Figure 9, was intentionally cropped from a real image (taken from Earth) to validate the capability of the software to handle multiply-cropped Moon images. Four is also the maximum possible number of cropped segments. The four panels of Figure 10 show the four main phases of the image processing. Once the image differentiation has been computed, the first 1490 points with the highest derivatives have been selected. This number differs from N = 1500 because the image is small, and the number of points selected is also a function of the total number of pixels (N = 1500 is also the maximum number).
The box-based outlier elimination approach discarded 375 points, while RANSAC eliminated an additional five points. The resulting points are shown in the bottom right of Figure 10 using white markers.
The bottom right of Figure 10 shows the results obtained by the Taubin fitting approach to estimate the circle parameters (Moon center and radius). The maximum distance from the estimated circle (worst residual) of the selected points was 2.9 pixels. The Taubin estimation gives a Moon center located at row 185.56 and column 187.34 and a radius of 218.03. Using this estimate, the set of points of the Moon edge shown by white markers in Figure 11 has been selected. These points are then used by CSF-LS to perform the nonlinear least-squares estimation of the Moon center and radius using the circular sigmoid function. The new estimate implies a variation of −0.7, −0.5 and 0.4 pixels for the row and column center coordinates and for the radius, respectively.
Figure 9. Example 1: Original image.
Figure 10. Example 1: Image processing phases (data in pixels). (a) 1490 highest derivatives; (b) 21 × 21 box approach removed 375 outliers; (c) RANSAC removed 5 outliers using 100 tests and d_min = 4; (d) Body center inside image. Row center = 185.5991, column center = 187.339, radius = 218.0337 (worst residual = 2.9321).
Figure 11. Example 1: Circular Sigmoid Function (CSF)-LS final estimation. Center (white mark) inside the image. Row center: 186.169 pixels (Δr = −0.71781). Column center: 187.8695 pixels (Δc = −0.53055). Radius: 217.6452 pixels (ΔR = 0.38853).

6.2. Example 2: Synthetic Image Cropped

In the second example, the test image is a synthetically-generated image. All of the relevant data and results are shown in Figure 12, Figure 13 and Figure 14.
Figure 12. Image convolution using a 3 × 3 mask. (a) Original image; (b) convoluted image.
Figure 13. Example 2: Image processing phases (data in pixels). (a) 1500 highest derivatives; (b) 21 × 21 box approach removed 911 outliers; (c) RANSAC removed 229 outliers using 100 tests and d_min = 4; (d) Body center inside image. Row center = 189.0829, column center = 437.5094, radius = 89.0182 (worst residual = 4.9575).
Figure 14. Example 2: Circular Sigmoid Function (CSF)-LS final estimation (data in pixels). Center (white mark) inside the image. Row center: 189.2105 (Δr = −0.12759). Column center: 437.1483 (Δc = 0.3611). Radius: 89.0247 (ΔR = −0.0064852).

6.3. Example 3: Real Image with Center Far Outside the Imager

All of the relevant data and results are shown in Figure 15, Figure 16 and Figure 17.
Figure 15. Example 3: Original image.
Figure 16. Example 3: Image processing phases (data in pixels). (a) 1500 highest derivatives; (b) 21 × 21 box approach removed 35 outliers; (c) RANSAC removed 0 outliers using 100 tests and d_min = 4; (d) Body center outside image. Row center = 551.8926, column center = −393.0706, radius = 612.8029 (worst residual = 2.995).
Figure 17. Example 3: CSF-LS final estimation (data in pixels). Center outside the image. Row center: 556.0114 (Δr = −4.1188). Column center: −386.4042 (Δc = −6.6663). Radius: 605.5698 (ΔR = 7.233).

7. Error Analysis

This section is dedicated to the error analysis associated with the following sources:
  • Estimated position error: This analysis quantifies how the error in the measured Moon radius (in pixel) affects the spacecraft-to-Moon distance computation (in km).
  • Estimated attitude error: This analysis quantifies how the error in the camera attitude knowledge affects the spacecraft-to-Moon direction (unit-vector).
  • Image processing errors: This Monte Carlo analysis shows the distribution of the Moon radius and center estimated by the image processing software when the image is perturbed by Gaussian noise. This is done in the three cases of crescent, gibbous and full Moon illumination.

7.1. Position and Attitude Uncertainty Propagation

This section investigates the sensitivity of the estimated Moon radius to the position estimation error. It also investigates the error in the Moon center direction caused by the attitude estimation error.
The Moon radius (in pixels) is directly related to the spacecraft-to-Moon distance. The spacecraft-to-Moon vector is r_om = r_m − r, where r_m and r are the Moon and spacecraft position vectors in J2000, respectively. Vector r_m is simply derived from time, while the spacecraft position vector r can be considered a random vector variable with mean r̄ and standard deviation σ_r, that is r ∼ N(r̄, σ_r û), where û is a random unit-vector variable uniformly distributed over the unit-radius sphere. Neglecting the ephemeris errors, the uncertainty in the spacecraft position coincides with the uncertainty in the spacecraft-to-Moon distance, σ_D = σ_r.
The Moon radius in pixels in the imager is provided by:
$$r = \frac{f}{d}\, \frac{R_m}{\sqrt{D^2 - R_m^2}}$$
where d is the pixel dimension (in the same units as the focal length f), R_m = 1737.5 km is the Moon radius and D is the spacecraft-to-Moon distance. This equation allows us to derive the uncertainty in the Moon radius in the imager:
$$\sigma_r = \left|\frac{\partial r}{\partial D}\right|_{\bar{r}} \sigma_D = \frac{f\, D\, R_m}{d\, (D^2 - R_m^2)^{3/2}}\, \sigma_D$$
Inverting this relation:
$$\sigma_D = \frac{d\, (D^2 - R_m^2)^{3/2}}{f\, D\, R_m}\, \sigma_r$$
provides important information about the accuracy required of the radius estimation. This equation tells how the accuracy of the radius estimate affects the spacecraft-to-Moon distance and, consequently, the estimation of the spacecraft position.
This relation is plotted in Figure 18 as a function of the Moon radius error for four values of the Moon distance: D = 50,000 km, D = 150,000 km, D = 250,000 km and D = 35,000 km.
Figure 18. Moon distance error vs. distance.
The Moon direction uncertainty (in pixels) is caused by the uncertainty in the attitude knowledge. Consider the attitude q ∼ N(q̄, 2σ_q û) with mean q̄ and standard deviation 2σ_q. The displacement of the Moon center with respect to the camera optical axis is c = (f/d) tan ϑ, where ϑ is the angle between the estimated Moon center and the camera optical axis. Therefore, the radial uncertainty in the Moon center direction is provided by:
$$\sigma_c = \frac{f}{d}\, \left.\frac{1}{\cos^2\vartheta}\right|_{\bar{q}}\, \sigma_q$$
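A quick numerical check of the two propagation formulas above; the focal length and pixel size are the Aptina values quoted in the Introduction, while the remaining inputs are example numbers only:

```python
import numpy as np

R_M = 1737.5           # Moon radius, km
f, d = 16.0, 0.0022    # focal length and pixel size, mm (Aptina imager, Section 1)

def sigma_distance(D, sigma_r_pix):
    """Spacecraft-to-Moon distance error (km) caused by a Moon radius
    estimation error of sigma_r_pix pixels at distance D (km)."""
    return d * (D**2 - R_M**2)**1.5 / (f * D * R_M) * sigma_r_pix

def sigma_center(theta, sigma_q):
    """Radial Moon-center error (pixels) caused by an attitude error sigma_q
    (rad) when the Moon center is at angle theta from the optical axis."""
    return f / d / np.cos(theta)**2 * sigma_q

# example inputs: 0.3-pixel radius error at D = 250,000 km; 1e-4 rad attitude error at 5 deg off-axis
print(sigma_distance(250e3, 0.3), "km")
print(sigma_center(np.radians(5.0), 1e-4), "pixels")
```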

7.2. Image Processing Error

Image processing error quantification requires performing statistics on a substantial set of images with known true data. Since no such ideal database is currently available, the dispersion of the estimated parameters (Moon radius and centroid) is analyzed by adding Gaussian noise to the same image. The Moon illumination scenario plays a key role here, since when the Moon is fully illuminated, the number of data (pixels) used to perform the estimation is twice as large as in the partial illumination case (under the same conditions: distance to the Moon and selected camera). For this reason, two scenarios have been considered:
  • Partially-illuminated Moon (results provided in Figure 19);
  • Fully-illuminated Moon (results provided in Figure 20).
Figure 19. Image processing error results: partial illumination case.
Figure 20. Image processing error results: full illumination case.
The results of the tests done using the partially illuminated Moon are shown in Figure 19. The selected image is real, and it was taken on 6 March 2013 at 05:24:30 using a focal length f = 100 mm. The statistics of the results obtained in 1000 tests by adding Gaussian noise with zero mean and standard deviation σ = 10 gray tones are given. The code estimates the Moon radius and center (row and column) by least-squares using the circular sigmoid function (CSF-LS). The results show that the three estimated parameters are unbiased, with a maximum error of the order of 0.2 pixels (with respect to the original picture). The histogram of the number of iterations required by the least-squares to converge shows an average of eight iterations. The central-upper plot in Figure 19 shows the elliptical distribution of the Moon center error, due to the fact that the estimation uses only data on one side of the Moon, on the illuminated part.
The results of the tests done using the fully illuminated Moon are shown in Figure 20. In this case too, the image is real and was taken in Houston on 25 February 2013 at 22:33:00 CDT using a focal length f = 105 mm. The statistics of the results obtained in 1000 tests by adding Gaussian noise with zero mean and standard deviation σ = 10 gray tones are given. The code estimates the Moon radius and center (row and column) by least-squares using CSF-LS. The results show that the three estimated parameters are unbiased, with a maximum error of the order of 0.01 pixels, one order of magnitude more accurate than what was obtained in the partially illuminated case. The histogram of the number of iterations required by the least-squares to converge shows an average of three iterations. The central-upper plot in Figure 20 shows the unbiased Gaussian distribution of the Moon center error. These more accurate results are due to the fact that the estimation uses data all around the Moon edge. The Monte Carlo procedure is sketched below.
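For illustration, the noise-perturbation loop described in this section can be written as below (a sketch; `csf_estimate` is a placeholder for the CSF-LS estimator and must be supplied by the caller):

```python
import numpy as np

def monte_carlo_dispersion(image, csf_estimate, n_runs=1000, sigma=10.0, seed=0):
    """Dispersion of the estimated (row, col, radius) when the same image is
    perturbed by zero-mean Gaussian noise of standard deviation `sigma` gray
    tones. `csf_estimate` maps a noisy image to (row, col, radius)."""
    rng = np.random.default_rng(seed)
    estimates = np.empty((n_runs, 3))
    for i in range(n_runs):
        noisy = np.clip(image + rng.normal(0.0, sigma, image.shape), 0, 255)
        estimates[i] = csf_estimate(noisy)
    return estimates.mean(axis=0), estimates.std(axis=0)
```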

8. Conclusions

This paper presents an image processing algorithm for vision-based navigation capable of estimating the observer-to-Moon vector. This information is then used for autonomous onboard trajectory determination in cislunar space. The proposed method first identifies the Moon edge pixels by numerical differentiation using Richardson extrapolation. This creates a gradient image in which most of the brightest pixels belong to the Moon edge. However, some of these pixels are generated on the Moon surface (outliers) due to the combination of illumination and Moon topology (craters). Two sequential filters are then adopted to remove these unwanted outliers. The first filter is a box-based method that discriminates the edge pixels based on a local analysis of both the original and the gradient images, while the second filter is a standard RANSAC approach for circles. Once a subset of pixels belonging to the Moon edge has been identified, these pixels are used to perform a first (crude) estimation of the Moon centroid and radius. This is done in closed form using Taubin's method to perform a least-squares best fit of a circle. This first estimate is then used in a subsequent accurate iterative nonlinear least-squares approach to estimate the Moon center and radius using circular sigmoid functions. Circular sigmoid functions accurately describe the actual gray tone smooth transition at the hard edge. In particular, this transition is described by a smoothing parameter, which is also estimated. A preliminary analysis of the algorithm's performance is carried out by processing both real and synthetic lunar images.

Acknowledgments

The authors gratefully thank Dr. Chris D'Souza and Steven Lockhart of NASA-JSC for their direct and indirect contributions to this study.

Author Contributions

Daniele Mortari conceived of and developed the original theory. Francesco de Dilectis validated the idea by developing the software. Renato Zanetti supervised in detail the theory and validation and improved the use of English in the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chory, M.A.; Hoffman, D.P.; Major, C.S.; Spector, V.A. Autonomous navigation: Where we are in 1984. In Guidance and Control; AIAA, Inc.: Keystone, CO, USA, 1984; pp. 27–37.
  2. Owen, W.; Duxbury, T.; Acton, C.; Synnott, S.; Riedel, J.; Bhaskaran, S. A brief history of optical navigation at JPL. In Proceedings of the 31st AAS Rocky Mountain Guidance and Control Conference, Breckenridge, CO, USA, 1–6 February 2008.
  3. Synnott, S.; Donegan, A.; Riedel, J.; Stuve, J. Interplanetary optical navigation: Voyager Uranus encounter. In Proceedings of the AIAA Astrodynamics Conference, Williamsburg, VA, USA, 18–20 August 1986.
  4. Riedel, J.E.; Owen, W.M., Jr.; Stuve, J.A.; Synnott, S.P.; Vaughan, R.A.M. Optical navigation during the Voyager Neptune encounter. In Proceedings of the AIAA/AAS Astrodynamics Conference, Portland, OR, USA, 20–22 August 1990.
  5. Gillam, S.D.; Owen, W.M., Jr.; Vaughan, A.T.; Wang, T.-C.M.; Costello, J.D.; Jacobson, R.A.; Bluhm, D.; Pojman, J.L.; Ionasescu, R. Optical navigation for the Cassini/Huygens mission. In Proceedings of the AIAA/AAS Astrodynamics Specialist Conference, Mackinac Island, MI, USA, 19–23 August 2007.
  6. Li, S.; Lu, R.; Zhang, L.; Peng, Y. Image processing algorithms for deep-space autonomous optical navigation. J. Navig. 2013, 66, 605–623.
  7. Christian, J.A.; Lightsey, E.G. Onboard image-processing algorithm for a spacecraft optical navigation sensor system. J. Spacecr. Rockets 2012, 49, 337–352.
  8. Christian, J.A. Optical navigation using planet's centroid and apparent diameter in image. J. Guid. Control Dyn. 2015, 38, 192–204.
  9. Mortari, D.; De Dilectis, F.; Zanetti, R. Position estimation using image derivative. In Proceedings of the AAS/AIAA Space Flight Mechanics Meeting, Williamsburg, VA, USA, 12–15 January 2015.
  10. Taubin, G. Estimation of planar curves, surfaces and nonplanar space curves defined by implicit equations, with applications to edge and range image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 1115–1138.
  11. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
  12. Mortari, D.; Samaan, M.A.; Bruccoleri, C.; Junkins, J.L. The pyramid star pattern recognition algorithm. ION Navigation 2004, 51, 171–183.
  13. Chernov, N. Circular and Linear Regression: Fitting Circles and Lines by Least Squares; Monographs on Statistics and Applied Probability; CRC Press: Boca Raton, FL, USA, 2011; Volume 117.
