High-Precision Heading Determination Based on the Sun for Mars Rovers
Spirit and Opportunity, the twin rovers of NASA's Mars Exploration Rover mission, landed on Mars in 2004. Some discoveries of the rovers, such as the evidence of liquid water on the surface of Mars, have been a source of excitement. Opportunity has been kept in service for 13 years and has travelled over 42.195 km, becoming a veritable marathon champion. High-precision attitude-determination technology for Mars rovers, especially heading-determination technology, has played an important role in safe driving and the realization of scientific goals. Because there is no satellite navigation system like GPS on Mars, the rovers utilize the Pancam cameras as sun sensors for heading, which can restrict the error growth of the IMU and improve the dead-reckoning accuracy. When the rover remains static, the sun sensor makes a 10-minute tracking observation of the sun, and the quaternion estimator (QUEST) method is used to calculate the attitude and heading, the precision of which reaches 1.5[degrees]. Because the field of view of the sun sensor on Spirit and Opportunity is only 16[degrees], a rotation platform with a dial is needed. The rover first rotates the sun sensor to the predicted elevation of the sun and then horizontally scans the sky to find the sun. Long-term searching and observation of the sun do not fulfill the real-time navigation requirement for Mars rovers, and the low-precision heading greatly affects the dead reckoning.
Several Jet Propulsion Laboratory (JPL) studies have also used the sun sensor for heading determination for the Rocky 7 and FIDO field rovers, but the method is similar to that of Spirit and Opportunity, and the precision in the field tests is also to within a few degrees [3,4]. Deans et al. bundled a fish-eye camera and an inclinometer together, and the precision of the heading determination is better than 1[degrees]. Most published heading-determination methods require long-duration observations only when inclinometer data is unavailable. Enright et al. and Furgale et al. use a digital sun sensor with a 140[degrees] field of view and an HMR-3000 inclinometer for heading, and practical tests on Earth indicate that the precision reaches 1[degrees] [6,7]. Yang et al. use a large-field-of-view sun sensor for heading determination, and the precision reaches 0.1123[degrees] when observing the sun for 30 minutes, which is the best reported precision so far. Illyas et al. present a novel algorithm for micro-planetary rover-heading determination using a low-cost sun sensor, and a large number of experiments show that the heading precision reaches 0.09[degrees] (1[sigma]), which plays an important role in reducing the accumulated heading error of MEMS sensors. In a GPS-denied environment, visual navigation can provide accurate localization, but the error grows sharply with the distance travelled. Lambert et al. develop a novel approach incorporating the sun sensor and inclinometer measurements directly into the visual odometry pipeline to reduce the error growth of path estimation, and the resulting localization error is only 1.1% of a 10-km distance travelled.
The Mars rover heading-determination method via the inertial navigation system and star sensor has also been thoroughly researched. A novel method based on star sensors and the strap-down inertial navigation system (SINS) has been put forward to accurately determine the rover's position and attitude, and the initial position and attitude determination for planetary rovers by INS/star sensor integration has been researched. A high-precision SINS/star sensor deeply integrated navigation scheme is developed by He et al. In recent years, the inertial navigation system, star sensor, and visual navigation system have been integrated to improve the navigation accuracy and reliability. However, Mars has some atmosphere, which makes stars invisible in the daytime. Furthermore, the rovers always move during the day and rest at night, which makes the star sensor difficult to apply.
It is evident that heading determination by the sun will always play a significant role in Mars exploration rovers, and the precision of the heading strongly affects the navigation results. Because the rovers move with a maximum speed of 5 cm/s on the surface of Mars and are stationary most of time, this paper only considers the problem of static heading determination. The main focus of this paper is using only one image of the sun to achieve a heading-determination precision on the order of 1 arcminute. This paper is organized as follows. First, we introduce the basic theories of heading determination using the sun, and major factors that affect the precision of the heading are analyzed. Next, the sun sensor calibration model, sun sensor, inclinometer alignment model, and sun image centroid-extraction algorithm are described in detail. After that, our prototype is introduced and three field tests are conducted to verify our models, methods, and algorithms. Finally, we give the basic conclusion according to the results of the field tests.
2. Basic Theories of Heading Determination Using the Sun
In this section, we introduce the main concepts related to this paper and provide basic theories of heading determination using sun sensor, inclinometer, and local clock. Table 1 describes the main frames in this paper and gives their names, notations, and definitions.
Due to the objective conditions, all our field tests are conducted on Earth, so we only discuss the heading-determination problem in the Earth reference frame. Figure 1 provides the definitions and spatial relation of the important frames, and [omega] is the rover's heading to be estimated.
We assume that the sun vectors in the E and H frames are [S.sub.E] and [S.sub.H], respectively. [R.sup.H.sub.E] represents the rotation matrix from frame E to frame H, which can be written by
[S.sub.H] = [R.sup.H.sub.E] x [S.sub.E]. (1)
According to Figure 1, [R.sup.H.sub.E] can be derived by double rotations:
[R.sup.H.sub.E] = [R.sub.Y]([phi] - 90[degrees]) x [R.sub.Z](G + [lambda] + 180[degrees]), (2)
where G is the Greenwich sidereal time calculated by the existing formula according to observation epoch, and [lambda] and [phi] are the longitude and latitude of the rover provided by dead reckoning on Mars. Because [S.sub.E] can be calculated by ephemeris such as DE405, we can obtain the sun vector [S.sub.H] relative to the local horizontal frame by (1). Furthermore, we can determine the sun's predicted azimuth [A.sub.H] relative to local north according to [S.sub.H].
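The prediction step of equations (1) and (2) can be sketched in a few lines. In the minimal Python illustration below, the rotation-matrix sign conventions and the azimuth convention (measured from the north/X axis of the H frame toward east/Y) are our assumptions, and s_e stands for a unit sun vector taken from the ephemeris:

```python
import numpy as np

def rot_y(deg):
    """Basic rotation about the Y axis (frame rotation, degrees); the sign
    convention is an assumed one, matching Appendix B's style."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0.0, -np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [np.sin(a), 0.0, np.cos(a)]])

def rot_z(deg):
    a = np.radians(deg)
    return np.array([[np.cos(a), np.sin(a), 0.0],
                     [-np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def predicted_sun_azimuth(s_e, lat_deg, lon_deg, gst_deg):
    """Equations (1)-(2): rotate the ephemeris sun vector s_e from the
    Earth frame into the local horizontal frame, then read off the
    predicted azimuth A_H in degrees."""
    s_h = rot_y(lat_deg - 90.0) @ rot_z(gst_deg + lon_deg + 180.0) @ s_e
    return np.degrees(np.arctan2(s_h[1], s_h[0])) % 360.0
```

In practice s_e would come from DE405 via an ephemeris library; the rotation leaves the vector's norm unchanged, so s_h stays a unit vector.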
Figure 2 gives the schematic of sun azimuth measurement using a sun sensor. Suppose the sun sensor is adjusted to be horizontal and the sun sensor frame C coincides with the transition frame T. Once a sun image is captured, we can calculate [A.sub.C] or [A.sub.T] according to sun image centroid coordinates, projection model, and distortion model. Then, the rover's heading [omega] can be calculated by
[omega] = [A.sub.H] - [A.sub.T]. (3)
It is evident that the precision of [A.sub.T] determines the heading results. In order to improve the measurement precision of the sun azimuth, we must strictly solve the following three problems:
(1) The sun sensor always utilizes a large-field-of-view lens to avoid platform rotation, but considerable imaging distortion is introduced. An accurate distortion calibration model is needed.
(2) In practice, the sun sensor is difficult to keep absolutely horizontal, so a suitable and accurate algorithm for inclination correction must be seriously considered.
(3) Because the sun is a disk object, it always occupies a regular area on the image plane, which means a proper image-processing algorithm is needed to calculate the centroid of the sun image. Due to the large field of view, the sun sensor always suffers poor resolution. We must optimize the existing centroid-extraction algorithm to improve the centroid-extraction precision.
3.1. Sun Sensor Calibration Model
3.1.1. Error Equation. Because there is no regular control point on the surface of Mars, the sun sensor is usually calibrated in the laboratory before launch. We build a 10-m diameter dome, on the surface of which we uniformly install 37 super-bright optical fibers as control points. The 3D coordinates of the control points are surveyed by high-precision total station Leica TDRA6000, which has an orientation measurement precision of 0.5". Figure 3 shows the dome model and measurement sketch of the control points.
The purpose of sun sensor calibration is to obtain the parameters of the sun sensor, including the image principal point ([x.sub.0], [y.sub.0]), focal length f, radial distortion parameters ([k.sub.1], [k.sub.2], [k.sub.3]), rotation parameters (a, b, c), and translation parameters ([X.sub.0], [Y.sub.0], [Z.sub.0]). Figure 4 depicts the geometric or physical meaning of the 12 parameters.
([x.sub.0], [y.sub.0]), f, and ([k.sub.1], [k.sub.2], [k.sub.3]) are interior elements relative to the sun sensor; (a, b, c) and ([X.sub.0], [Y.sub.0], [Z.sub.0]) are exterior orientation elements relative to local horizontal frame; and [DELTA][theta] is the error of the half angle of view caused by radial distortion. Because the sun sensor utilizes a solid-angle projection, the real half angle of view is represented as
[theta] = 2 arcsin (r/2f) + [DELTA][theta]. (4)
Suppose (x, y) are coordinates of the image centroid of a control point. The polar distance r is represented as
r = [square root of ([(x - [x.sub.0]).sup.2] + [(y - [y.sub.0]).sup.2])], (5)
and we adopt the polynomial model to describe the radial distortion:
[DELTA][theta] = [k.sub.1][(arcsin(r/2f)).sup.2] + [k.sub.2][(arcsin(r/2f)).sup.3]
+ [k.sub.3][(arcsin(r/2f)).sup.4]. (6)
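Equations (4)-(6) combine into a single function. A minimal Python sketch follows, where the focal length f is assumed to be expressed in pixels (for our sensor roughly 10.5 mm / 12 [micro]m per pixel, an illustrative value only):

```python
import numpy as np

def half_angle(r, f, k1, k2, k3):
    """Half angle of view theta for a point at polar distance r from the
    principal point, under the solid-angle projection r = 2f sin(theta/2)
    plus the polynomial radial-distortion term of equation (6)."""
    s = np.arcsin(r / (2.0 * f))          # S = arcsin(r / 2f)
    return 2.0 * s + k1 * s**2 + k2 * s**3 + k3 * s**4
```

With zero distortion coefficients the function exactly inverts the ideal solid-angle projection.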
The azimuth of the control point in the sun sensor frame is written as
[A.sub.C] = arctan ([x.sub.p]/[y.sub.p]), (7)
where [x.sub.p] = x - [x.sub.0] and [y.sub.p] = y - [y.sub.0]. Hence, the vector of the control point in the sun sensor frame is represented as
[S.sub.C] = [[sin [theta] sin [A.sub.C]  sin [theta] cos [A.sub.C]  cos [theta]].sup.T], (8)
and the vector of the control point in horizontal frame can be written as
[S.sub.H] = [[[X.sub.H] [Y.sub.H] [Z.sub.H]].sup.T]. (9)
We move the horizontal frame to make its origin coincide with that of the sun sensor frame, and the vector of the control point in the new frame becomes
S = [[[X.sub.H] + [X.sub.0]  [Y.sub.H] + [Y.sub.0]  [Z.sub.H] + [Z.sub.0]].sup.T]. (10)
S is normalized as
[S.sub.0] = S/[absolute value of S] = [[X Y Z].sup.T]. (11)
The relationship between [S.sub.C] and [S.sub.0] can be described by a rotation matrix R, as follows:
[S.sub.0] = R x [S.sub.C]. (12)
Equation (12) is our basic observation equation of sun sensor calibration. Unlike previous studies, we first employ the antisymmetric matrix Q instead of Euler angles to express R, which reduces the amount of calculation by avoiding trigonometric sines and cosines. It has been proven that every rotation matrix can be expressed by an antisymmetric matrix as follows:
R = [(I - Q).sup.-1] (I + Q), (13)
where I and Q are written as
I =
[1 0 0]
[0 1 0]
[0 0 1],
Q =
[0 -c b]
[c 0 -a]
[-b a 0]. (14)
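Because the Cayley transform of equation (13) maps any antisymmetric Q to a proper rotation, the parameterization can be verified numerically. A short sketch, where the placement of (a, b, c) inside Q is an assumed convention:

```python
import numpy as np

def cayley_rotation(a, b, c):
    """R = (I - Q)^-1 (I + Q), equation (13). Q is antisymmetric, so the
    result is always a proper rotation matrix (Cayley transform); no
    trigonometric functions are needed."""
    Q = np.array([[0.0,  -c,   b],
                  [  c, 0.0,  -a],
                  [ -b,   a, 0.0]])
    I = np.eye(3)
    # Solve (I - Q) R = (I + Q) instead of forming an explicit inverse.
    return np.linalg.solve(I - Q, I + Q)
```

The absence of sines and cosines is the computational advantage noted above.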
Then, (12) can be written as
(I - Q) x [S.sub.0] = (I + Q) x [S.sub.C]. (15)
Let [[[v.sub.1] [v.sub.2] [v.sub.3]].sup.T] represent the error vector of (15); then the error equation is expressed by
[[[v.sub.1] [v.sub.2] [v.sub.3]].sup.T] = (I - Q) x [S.sub.0] - (I + Q) x [S.sub.C]. (16)
Equation (16) needs to be linearized before being solved. Appendix A gives the partial derivatives of [V.sub.i] with respect to the 12 parameters, which can be used to linearize (16). Because there are 37 control points, 37 x 3 = 111 linearized error equations can be obtained. We express the error equations in the form of a vector:
v = A x X - l, (17)
where v is the residual vector; A is the coefficient matrix; X is the vector of corrections to the 12 unknown parameters; and l is the free term vector. The least-squares method is used to estimate the 12 parameters:
X = [([A.sup.T] A).sup.-1] [A.sup.T]l. (18)
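Equations (17) and (18) form a standard linear least-squares problem. A minimal numpy sketch on a synthetic 111 x 12 system (37 control points x 3 equations each), where the random matrix merely stands in for the real Jacobian:

```python
import numpy as np

def solve_correction(A, l):
    """Least-squares estimate X = (A^T A)^-1 A^T l of equation (18),
    computed by solving the normal equations rather than inverting."""
    return np.linalg.solve(A.T @ A, A.T @ l)

# Synthetic check: recover a known parameter vector from l = A X.
rng = np.random.default_rng(42)
A = rng.normal(size=(111, 12))      # stand-in for the linearized Jacobian
X_true = rng.normal(size=12)
l = A @ X_true
X_hat = solve_correction(A, l)
```

In the real adjustment the solution is iterated: the corrections are applied to the current parameter estimates and the equations are re-linearized until convergence.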
3.1.2. Calibration Results. Figure 5 is an image of the 37 control points from our sun sensor introduced in Section 4.1. After threshold operation, we use the gray centroid method to detect the image centroids of the control points. Figure 6 depicts the residuals of the 37 image points after calibration.
The standard deviations of residuals along the x and y axes are [+ or -] 0.169 pixels and [+ or -] 0.185 pixels, respectively. The calibration precision is similar to that of methods based on a 2D or 3D calibration board [4,18]. As the minimum elevation of the control points is about 15[degrees], the effective calibration field of the sun sensor reaches 150[degrees]. Later, we put the sun sensor near the center of the dome and rotate it in 8 directions in order. In each direction, the sun sensor captures an image of the 37 control points for calibration. The results of the 6 interior parameters of the sun sensor in each direction are listed in Table 2.
In Table 2, the standard deviations of [x.sub.0], [y.sub.0], and f are [+ or -] 0.081 pixels, [+ or -] 0.097 pixels, and [+ or -] 0.068 pixels, respectively, which indicate high consistency of the calibration results in 8 directions. However, the mean values of ([x.sub.0], [y.sub.0]), f, and ([k.sub.1], [k.sub.2], [k.sub.3]) for the 8 directions cannot be adopted as the final results because of the strong correlations between f and ([k.sub.1], [k.sub.2], [k.sub.3]) and between ([x.sub.0], [y.sub.0]) and (a,b,c). The standard deviation of the residuals, which should be as small as possible, is a good criterion to choose the best set of parameters. Direction 4 achieves the smallest standard deviation of the residuals, so the fourth group of interior parameters should be the final result.
3.2. Sun Sensor and Inclinometer Alignment Model. Prior studies generally use the sun as a control point to determine the relationship between the sun sensor and inclinometer, which must wait for the sun to move along long traces and obtains low-precision results because only one control point is available in the sky [6,7,19]. However, the 10-m diameter dome with 37 control points provides ideal conditions for the sun sensor and inclinometer calibration. Because the coordinates of the 37 control points are surveyed by the high-precision total station, they contain local gravity information, which can be used to determine the relationship between the sun sensor and inclinometer.
In Section 3.1, ([x.sub.0], [y.sub.0]), f, and ([k.sub.1], [k.sub.2], [k.sub.3]) are determined; hence, we use the least-squares method to calculate (a, b, c) and ([X.sub.0], [Y.sub.0], [Z.sub.0]), which describe the position and attitude of the sun sensor frame relative to the horizontal frame. Then, the rotation matrix [R.sup.H.sub.C] from the sun sensor frame to the horizontal frame is calculated using Appendix B, and [R.sup.H.sub.C] can be described by 3-step rotations around the [X.sub.C], [Y.sub.C], and [Z.sub.C] axes in order:
(1) Rotate an angle of [gamma] around the [X.sub.C] axis.
(2) Rotate an angle of [psi] around the [Y.sub.C] axis.
(3) Rotate an angle of [kappa] around the [Z.sub.C] axis.
[R.sup.H.sub.C] is written as
[R.sup.H.sub.C] = [R.sub.Z]([kappa]) [R.sub.Y]([psi]) [R.sub.X]([gamma]). (19)
Appendix B gives the expressions for [R.sub.Z]([kappa]), [R.sub.Y]([psi]), and [R.sub.X]([gamma]). Then, (19) is expanded as
[mathematical expression not reproducible]. (20)
Here, we provide the expressions of [gamma] and [psi]:
[gamma] = -atan ([R.sup.H.sub.C](3, 2)/[R.sup.H.sub.C](3, 3))
[psi] = asin [R.sup.H.sub.C](3, 1). (21)
When the outputs of the inclinometer are adjusted to zero, we rotate the sun sensor frame by steps (1) and (2); then the Z axis of the new frame opposes local gravity. In other words, if the outputs of the inclinometer are adjusted to zero and the angles [gamma] and [psi] are known, the relationship between sun sensor and inclinometer can be determined.
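Equation (21) can be verified against the basic rotation matrices of Appendix B. A sketch in Python, where the exact sign conventions of R_X, R_Y, and R_Z are our assumptions:

```python
import numpy as np

def rx(a):
    """Rotation about X (radians); sign convention assumed."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,  np.cos(a), np.sin(a)],
                     [0.0, -np.sin(a), np.cos(a)]])

def ry(a):
    return np.array([[np.cos(a), 0.0, -np.sin(a)],
                     [0.0, 1.0, 0.0],
                     [np.sin(a), 0.0,  np.cos(a)]])

def rz(a):
    return np.array([[ np.cos(a), np.sin(a), 0.0],
                     [-np.sin(a), np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def tilt_angles(R):
    """Recover gamma and psi (radians) from R = R_Z(kappa) R_Y(psi) R_X(gamma),
    following equation (21); kappa drops out of the third row."""
    gamma = -np.arctan2(R[2, 1], R[2, 2])
    psi = np.arcsin(R[2, 0])
    return gamma, psi
```

The round trip recovers small alignment angles exactly, which is the property used for the sun sensor/inclinometer alignment.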
In the experiment of Section 3.1.2, we always adjust the legs of the prototype introduced in Section 4.1 to keep the outputs of the inclinometer close to zero (smaller than 1" in practice). Table 3 lists the results of [gamma] and [psi] in each direction.
Standard deviations of [gamma] and [psi] are [+ or -]2.6" and [+ or -]8.6", respectively. The mean values ([gamma] = 1124.5", [psi] = 134.1") are adopted as the final parameters. Table 3 indicates the high-precision relationship determination between the sun sensor and inclinometer, which is the basis of high-precision heading determination.
3.3. Sun Image Centroid-Extraction Algorithm. Because large-field sun sensors suffer from poor angular resolution, we must improve the precision of the sun image centroid extraction as much as possible. The gray centroid method is widely used for sun image centroid extraction, but the precision is not high when the sun image is irregular [17,20]. Cui et al. consider the sun image as a circle and use the Sobel operator to detect the pixel-level edge; thus, the centroid is obtained by circle fitting of the edge points. Yang et al. adopt the Zernike moment to obtain the subpixel edge points and make the precision of the circular sun image centroid reach 0.07 pixels, which is obviously better than that of the gray centroid algorithm. However, for general projection models, such as the pinhole and solid-angle models, when the sun is away from the boresight direction, the shape of the sun image is more similar to an ellipse than a circle. Our algorithm for sun image centroid extraction is similar to that of Yang et al., but the difference is that both circular and elliptical sun images are considered, and a reasonable criterion for shape judgment is put forward. Our algorithm is realized in 6 steps:
(1) The Sobel operator is used to detect the pixel-level edge points (x, y) of the sun image.
(2) For each pixel-level edge point, the Zernike moment is used to detect the subpixel edge point (x', y'). Appendix C gives the computation method in detail. Figures 7(a) and 7(b) show practical circular and elliptical sun images, respectively, from our fish-eye sun sensor and their edge-detection results. Obviously, the Zernike moment produces a smoother edge than the Sobel operator, which is beneficial for improving the centroid fitting precision.
(3) According to the subpixel edge points (x', y'), the center of circle is fitted and the value of the cost function [J.sub.c] is calculated.
[J.sub.c]([x.sub.c], [y.sub.c], r) = [n.summation over (i=1)] [v.sup.2.sub.i], [v.sub.i] = [square root of ([([x'.sub.i] - [x.sub.c]).sup.2] + [([y'.sub.i] - [y.sub.c]).sup.2])] - r, (22)
where [v.sub.i] is the residual. Suppose the initial values of the unknown parameters are [X.sub.0] = [[[x.sub.c0] [y.sub.c0] [r.sub.0]].sup.T]; thus, [v.sub.i] is linearized as
[v.sub.i] = [a.sub.i][delta][x.sub.c] + [b.sub.i][delta][y.sub.c] + [c.sub.i][delta]r - [l.sub.i], (23)
[a.sub.i] = -([x'.sub.i] - [x.sub.c0])/[d.sub.i], [b.sub.i] = -([y'.sub.i] - [y.sub.c0])/[d.sub.i], [c.sub.i] = -1, [l.sub.i] = [r.sub.0] - [d.sub.i],
[d.sub.i] = [square root of ([([x'.sub.i] - [x.sub.c0]).sup.2] + [([y'.sub.i] - [y.sub.c0]).sup.2])]. (24)
The error equations of all the edge points can be written in the form of a matrix, as follows:
v = A x [delta]X - l, (25)
where v is the residual vector, A is the coefficient matrix, [delta]X is the correction vector corresponding to [X.sub.0], and l is the free term vector. The expanded forms of the four matrices and vectors are
v = [[[v.sub.1] [v.sub.2] ... [v.sub.n]].sup.T], (26)
A =
[[a.sub.1] [b.sub.1] [c.sub.1]]
[[a.sub.2] [b.sub.2] [c.sub.2]]
...
[[a.sub.n] [b.sub.n] [c.sub.n]], (27)
[delta]X = [[[delta][x.sub.c] [delta][y.sub.c] [delta]r].sup.T], (28)
l = [[[l.sub.1] [l.sub.2] ... [l.sub.n]].sup.T]. (29)
We assume all the edge points have the same weight, and then [delta]X is calculated by 
[delta]X = [([A.sup.T] A).sup.-1] [A.sup.T]l. (30)
The unknown parameters are updated by X = [X.sub.0] + [delta]X, and iterations are needed until the absolute values of [delta]X are smaller than 0.001 pixels. Then, [J.sub.c] can be calculated according to (22).
(4) According to the subpixel edge points, the ellipse's center is fitted and the value of the cost function [J.sub.e] is calculated.
[J.sub.e](A, B, C, D, E) = [n.summation over (i=1)][[absolute value of [Ax'.sup.2.sub.i] + [Bx'.sub.i][y'.sub.i] + [Cy'.sup.2.sub.i] + [Dx'.sub.i] + [Ey'.sub.i] + 1].sup.2], (31)
A, B, C, D, and E are the basic parameters of the ellipse that need to be estimated, and the ellipse's center ([x.sub.e], [y.sub.e]) can be calculated by
[x.sub.e] = (BE - 2CD)/(4AC - [B.sup.2])
[y.sub.e] = (BD - 2AE)/(4AC - [B.sup.2]). (32)
The error equation of each edge point is represented as
[v.sub.i] = [Ax'.sup.2.sub.i] + [Bx'.sub.i][y'.sub.i] + [Cy'.sup.2.sub.i] + [Dx'.sub.i] + [Ey'.sub.i] + 1. (33)
Equation (33) is linear; therefore, the coefficient matrix A and free term vector l are written as
A =
[[x'.sup.2.sub.1] [x'.sub.1][y'.sub.1] [y'.sup.2.sub.1] [x'.sub.1] [y'.sub.1]]
...
[[x'.sup.2.sub.n] [x'.sub.n][y'.sub.n] [y'.sup.2.sub.n] [x'.sub.n] [y'.sub.n]], (34)
l = [[-1 -1 ... -1].sup.T]. (35)
Suppose the vector of unknown parameters is X = [[A B C D E].sup.T], which can be calculated by X = [([A.sup.T]A).sup.-1] [A.sup.T]l. Then, [x.sub.e] and [y.sub.e] are derived by (32), and we use (31) to calculate [J.sub.e].
(5) The shape of the sun image is judged by comparing [J.sub.c] to [J.sub.e]. If [J.sub.c] < [J.sub.e], ([x.sub.c], [y.sub.c]) should be adopted. Otherwise, ([x.sub.e], [y.sub.e]) should be adopted.
(6) The precision is estimated. The root-mean-square error (RMSE) of the centroids can be calculated by
RMSE = [square root of ([m.sup.2.sub.x] + [m.sup.2.sub.y])]. (36)
[m.sub.x] and [m.sub.y] are the RMSE of centroid coordinates ([x.sub.c], [y.sub.c]) or ([x.sub.e], [y.sub.e]), which can be calculated according to the residual vector v and coefficient matrix A.
Figure 8 is the flow chart of our centroid-extraction algorithm for the sun image.
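Steps (3)-(5) above amount to two least-squares fits and a cost comparison. A minimal numpy sketch, assuming the subpixel edge points are given as an (n, 2) array; the Jacobian entries follow from differentiating the residual of (22):

```python
import numpy as np

def fit_circle(pts, n_iter=50, tol=1e-3):
    """Gauss-Newton circle fit minimizing equation (22):
    sum over i of (sqrt((x_i - xc)^2 + (y_i - yc)^2) - r)^2."""
    pts = np.asarray(pts, float)
    xc, yc = pts.mean(axis=0)                     # initial center: centroid
    r = np.mean(np.hypot(pts[:, 0] - xc, pts[:, 1] - yc))
    for _ in range(n_iter):
        d = np.hypot(pts[:, 0] - xc, pts[:, 1] - yc)
        A = np.column_stack([-(pts[:, 0] - xc) / d,   # d v_i / d xc
                             -(pts[:, 1] - yc) / d,   # d v_i / d yc
                             -np.ones(len(pts))])     # d v_i / d r
        l = -(d - r)                                  # minus current residuals
        dX = np.linalg.solve(A.T @ A, A.T @ l)
        xc, yc, r = xc + dX[0], yc + dX[1], r + dX[2]
        if np.all(np.abs(dX) < tol):                  # corrections < 0.001 px
            break
    J = np.sum((np.hypot(pts[:, 0] - xc, pts[:, 1] - yc) - r) ** 2)
    return (xc, yc), J

def fit_ellipse_center(pts):
    """Linear fit of A x^2 + B xy + C y^2 + D x + E y + 1 = 0 (equation
    (33)) and the ellipse center of equation (32)."""
    x, y = np.asarray(pts, float).T
    M = np.column_stack([x**2, x * y, y**2, x, y])
    p = np.linalg.solve(M.T @ M, M.T @ -np.ones(len(x)))
    A, B, C, D, E = p
    den = 4 * A * C - B**2
    center = ((B * E - 2 * C * D) / den, (B * D - 2 * A * E) / den)
    return center, np.sum((M @ p + 1.0) ** 2)

def sun_centroid(pts):
    """Step (5): keep whichever model yields the smaller cost."""
    cc, jc = fit_circle(pts)
    ce, je = fit_ellipse_center(pts)
    return cc if jc < je else ce
```

On a near-circular sun image the circle fit wins; off-boresight, the elliptical model takes over, exactly as the shape-judgment criterion intends.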
3.4. Heading-Determination Algorithm. Without loss of generality, we consider the X axis of the prototype to be aligned with that of the charge-coupled device (CCD), and the dip angles of the electronic inclinometer are always adjusted to zero in the field test. Then, we take the following steps to calculate the heading:
(1) Extract the centroid of a sun image using the method mentioned in Section 3.3 and calculate the vector [S.sub.C] of the sun with respect to the sun sensor frame according to the intrinsic parameters of the sun sensor.
(2) Rotate the camera coordinate system by [R.sub.X]([gamma]) and [R.sub.Y]([psi]); then calculate the vector of the sun in the transition frame [S.sub.T] by
[S.sub.T] = [R.sub.Y]([psi]) x [R.sub.X]([gamma]) x [S.sub.C]. (37)
The azimuth [A.sub.T] of the sun relative to the X axis of the new coordinate system can be calculated from [S.sub.T] according to (7). We then use the Naval Observatory Vector Astronomy Software version 3.0 (NOVAS3.0) and the DE405 ephemeris to calculate the sun's predicted azimuth [A.sub.H]. The observation epoch in UTC is provided by the local clock.
(3) Calculate the heading [omega] according to (3) and estimate the precision.
It is clear that only one sun image is needed to calculate the heading in our algorithm, which is beneficial for reducing the data-processing time. Figure 9 shows the flow chart of our heading-determination algorithm, where the contents of the red boxes are the key algorithms presented in Section 3.
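The final subtraction of equation (3) is worth wrapping so the result stays in a fixed range; a trivial sketch (the [0, 360) wrap is our convention, not stated in the text):

```python
def heading(a_h_deg, a_t_deg):
    """omega = A_H - A_T of equation (3), wrapped into [0, 360) degrees so
    that headings near due north remain well defined."""
    return (a_h_deg - a_t_deg) % 360.0
```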
4. Field Tests and Results
In order to test our algorithms and methods for high-precision heading determination, three field tests were conducted in central China's Henan province in 2017. The details of the tests are presented in this section, including the hardware configuration, test conditions, and test results.
4.1. Hardware Configuration. Our sun sensor is composed of a fish-eye lens, a CCD, and a filter. The fish-eye lens is the AF DX made by Nikon, with a 10.5-mm focal length and a 180[degrees] field of view. The CCD is the Alta U9000 made by Apogee, with a 3,056 x 3,056 array of 12-micron square pixels and a 7-inch-square, 3-inch-thick body. The filter is mounted between the lens and the CCD to weaken the light of the sun. The sun sensor can capture images of the sun without searching, which means that no rotating platform is needed.
The Leica Nivel230 electronic inclinometer is chosen to obtain the dip angles relative to the local horizontal plane; its precision is up to 1 arcsecond with a measurement range of -3.78 arcminutes to +3.78 arcminutes, a weight of 700 g, and a 3.5-inch-square, 2.7-inch-thick body. We choose the SA.45s chip-scale atomic clock made by Microsemi, with a monthly aging rate of 3 x [10.sup.-10], low power consumption of 120 mW, and a volume of 17 cc. The ARK-1122F embedded computer with a dual-core Atom processor is used for processing data, including sun images, dip angles, and time information.
We develop a prototype by integrating the sun sensor, inclinometer, and chip-scale atomic clock. Figure 10 shows a photograph of our prototype.
Each test was conducted by the following steps:
(1) Put the prototype on a pillar and adjust the legs of the prototype to make the outputs of the inclinometer close to zero.
(2) Keep the prototype static and continuously shoot images of the sun. During shooting, the inclinometer data and time data are collected together.
(3) Use the heading-determination algorithm presented in this paper to calculate the heading.
(4) Estimate the heading-determination precision.
4.2. Test Conditions. Detailed conditions of the three tests are presented in Table 4, including the date and time, observation times, sun elevation fluctuation, and weather.
The three tests were conducted in different months in order to analyze the impact of the sun elevation on the heading determination. On July 20, the maximum elevation of the sun was up to 75.7[degrees], which means the sun imaging positions were close to the principal point on the image plane. On Oct 14 and Nov 11, the maximum elevation of the sun was 46.7[degrees] and 37.7[degrees], respectively. This means that the sun imaging positions were away from the principal point on the image plane. Figure 11 shows the time series of the predicted elevation of the sun. Additionally, during the first and third tests, thin clouds or fog partially obscured the sun, which slightly impacts the observation of the sun.
4.3. Results. Because the real heading is difficult to obtain, we consider the mean value of the measured heading as a reference for precision estimation. Figure 12 depicts the time series of the heading errors of the three tests.
In the first test, the heading errors fluctuate greatly with an amplitude of 2.5 arcminutes. However, in the second and third tests, the heading errors become quite small, and the amplitude is only 1 arcminute. The standard deviations of the three tests are [+ or -] 0.97', [+ or -] 0.30', and [+ or -] 0.28', respectively. It is evident that the second and third tests achieve more precise heading results. It appears that the elevation of the sun has a great impact on the heading determination. Because the sun elevation has a strong correlation with the sun imaging position, Figure 13 directly shows the trajectories of the sun image centroids of the three tests on the image plane.
On July 20, the trajectory of the sun image centroids was close to the principal point, with a mean distance of 220.6 pixels. However, the trajectories on Oct 14 and Nov 11 were 642.8 pixels and 785.8 pixels away from the principal point, respectively. Obviously, radial errors of the sun image centroids have no effect on the sun azimuth measurement, whereas tangential errors influence it linearly. Because the tangential error of the sun image centroid is a small quantity, the error of the sun azimuth measurement can be expressed as follows:
[e.sub.A] = arctan ([e.sub.c]/r) [approximately equal to] [e.sub.c]/r, (38)
where [e.sub.c] is the tangential error of sun image centroid and r is the polar distance of sun image centroid relative to the principal point. The RMSE series of the sun image centroids, which are calculated by fitting the residuals of the subpixel edge points of the sun image, are depicted in Figure 14.
Figure 14 shows that the RMSE of the sun image centroids are similar in the three tests, which indicates that the sun elevation is irrelevant to the sun image centroid extraction. The RMSE of several sun image centroids in the first and third tests are up to 0.1 pixels, probably due to the cloudy or foggy weather. However, the mean RMSE is about 0.065 pixels, which indicates the superiority of our sun image centroid-extraction algorithm. The tangential mean RMSE can be estimated as 0.065/[square root of (2)] = 0.046 pixels. Supposing that [e.sub.c] = 0.046 pixels, [r.sub.1] = 220.6 pixels, [r.sub.2] = 642.8 pixels, and [r.sub.3] = 785.8 pixels, we can calculate the measurement errors of the sun azimuth. The results are 0.72', 0.25', and 0.20', respectively, which essentially coincide with the standard deviations of the heading error series in Figure 12.
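The arithmetic in this paragraph is a direct application of equation (38); a small Python check, converting the ratio of tangential error to polar distance from radians into arcminutes:

```python
import math

def azimuth_error_arcmin(e_c_px, r_px):
    """A tangential centroid error e_c at polar distance r maps to an
    azimuth error of about e_c / r radians (equation (38))."""
    return math.degrees(e_c_px / r_px) * 60.0

# The three test cases with e_c = 0.046 pixels:
errors = [azimuth_error_arcmin(0.046, r) for r in (220.6, 642.8, 785.8)]
```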
Additionally, the heading error series of the three tests are not totally random, and some trend in the variation is obvious. This is mainly caused by the residual distortion after sun sensor calibration, alignment error of the sun sensor and inclinometer, and periodic change in the weather. As the minimum imaging period of the sun sensor is 10.8 s and the heading calculation time is less than 0.1 s, we conclude that just one sun image can produce a 1-arcminute heading using our prototype.
5. Conclusions
This paper attempts to improve the heading-determination precision to the 1-arcminute level for Mars rovers using only one sun image. Algorithms for the sun sensor calibration, sun sensor and inclinometer alignment, sun image centroid extraction, and heading determination are presented in detail. A prototype is developed, and the results of three ground-based field tests indicate that the precision of the heading reaches 0.28'-0.97' (1[sigma]), which is the best reported precision for heading determination for Mars rovers so far. We not only improve the heading precision to the 1-arcminute level but also reduce the observation time for the sun from 10-30 minutes to about 10 seconds.
However, some questions still need to be studied in the future. (1) When the outputs of the electronic inclinometer are not strictly adjusted to zero, we need a proper model to correct for them. (2) Because a rover works on Mars for several years or even decades, we want to utilize the stars as control points to calibrate the sun sensor and inclinometer in situ. (3) Dust on the sun sensor, resulting from dust storms on Mars, and clouds obscuring the view of the sun may produce irregular sun images, which have an impact on the precision of the heading. We are trying to use the robust estimation method to adjust the weights of the edge points to improve the centroid-extraction precision.
A. Partial Derivatives of [V.sub.i] with respect to the 12 Parameters
Partial derivatives of [A.sub.C] and [theta] with respect to [x.sub.0] and [y.sub.0] are
[partial derivative][A.sub.C]/[partial derivative][x.sub.0] = -(y - [y.sub.0])/[r.sup.2]
[partial derivative][A.sub.C]/[partial derivative][y.sub.0] = (x - [x.sub.0])/[r.sup.2] (A.1)
[mathematical expression not reproducible] (A.2)
where S = arcsin (r/2f) and [partial derivative]S/[partial derivative]r, [partial derivative]S/[partial derivative]f, [partial derivative]r/[partial derivative][x.sub.0], and [partial derivative]r/[partial derivative][y.sub.0] are written as
[partial derivative]S/[partial derivative]r = 1/[square root of (4[f.sup.2] - [r.sup.2])]
[partial derivative]S/[partial derivative]f = -r/(f[square root of (4[f.sup.2] - [r.sup.2])]) (A.3)
[partial derivative]r/[partial derivative][x.sub.0] = ([x.sub.0] - x)/r
[partial derivative]r/[partial derivative][y.sub.0] = ([y.sub.0] - y)/r (A.4)
Partial derivatives of [A.sub.C] and [theta] with respect to f are
[partial derivative][A.sub.C]/[partial derivative]f = 0 (A.5)
[partial derivative][theta]/[partial derivative]f = (2 + 2[k.sub.1]S + 3[k.sub.2][S.sup.2] + 4[k.sub.3][S.sup.3])[partial derivative]S/[partial derivative]f (A.6)
Partial derivatives of [A.sub.C] and [theta] with respect to [k.sub.i] (i = 1,2,3) are
[partial derivative][A.sub.C]/[partial derivative][k.sub.i] = 0
[partial derivative][theta]/[partial derivative][k.sub.i] = [(arcsin(r/2f)).sup.i+1] (A.7)
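The analytic derivatives above can be checked against finite differences; a short sketch with illustrative values of r, f, and the k coefficients (not the calibrated ones):

```python
import math

def theta_model(r, f, k1, k2, k3):
    """theta of equation (4) with the distortion polynomial of (6)."""
    s = math.asin(r / (2.0 * f))
    return 2.0 * s + k1 * s**2 + k2 * s**3 + k3 * s**4

def dtheta_df(r, f, k1, k2, k3):
    """Analytic d(theta)/df, combining (A.6) with dS/df from (A.3)."""
    s = math.asin(r / (2.0 * f))
    ds_df = -r / (f * math.sqrt(4.0 * f**2 - r**2))
    return (2.0 + 2.0 * k1 * s + 3.0 * k2 * s**2 + 4.0 * k3 * s**3) * ds_df
```

A central difference with a small step in f should agree with the analytic expression to many digits.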
Because $S_C$ depends on the interior parameters only through $A_C$ and $\theta$, its partial derivatives with respect to $x_0$ and $y_0$ follow from the chain rule:
$$\frac{\partial S_C}{\partial x_0} = \frac{\partial S_C}{\partial A_C}\frac{\partial A_C}{\partial x_0} + \frac{\partial S_C}{\partial \theta}\frac{\partial \theta}{\partial x_0} \tag{A.8}$$
$$\frac{\partial S_C}{\partial y_0} = \frac{\partial S_C}{\partial A_C}\frac{\partial A_C}{\partial y_0} + \frac{\partial S_C}{\partial \theta}\frac{\partial \theta}{\partial y_0} \tag{A.9}$$
Since $\partial A_C/\partial f = \partial A_C/\partial k_i = 0$, the partial derivatives of $S_C$ with respect to $f$ and $k_i$ ($i = 1, 2, 3$) reduce to
$$\frac{\partial S_C}{\partial f} = \frac{\partial S_C}{\partial \theta}\frac{\partial \theta}{\partial f}, \qquad \frac{\partial S_C}{\partial k_i} = \frac{\partial S_C}{\partial \theta}\frac{\partial \theta}{\partial k_i} \tag{A.10}$$
Writing $S_0$ as the unit vector $S_0 = \frac{1}{d}\left(X_H + X_0,\ Y_H + Y_0,\ Z_H + Z_0\right)^{\mathsf T}$, the partial derivatives of $S_0$ with respect to $X_0$, $Y_0$, and $Z_0$ are
$$\frac{\partial S_0}{\partial X_0} = \frac{1}{d}\begin{pmatrix}1\\0\\0\end{pmatrix} - \frac{X_H + X_0}{d^3}\begin{pmatrix}X_H + X_0\\ Y_H + Y_0\\ Z_H + Z_0\end{pmatrix} \tag{A.11}$$
and similarly for $Y_0$ and $Z_0$, where $d = \sqrt{(X_H + X_0)^2 + (Y_H + Y_0)^2 + (Z_H + Z_0)^2}$. From the equations above, the partial derivatives of $V_i$ with respect to the 12 fish-eye sun sensor parameters are readily obtained.
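The closed-form partials in (A.3) can be checked against central finite differences; the values of $r$ and $f$ below (in pixels) are illustrative, not taken from the calibration results.

```python
import math

def S(r, f):
    # S = arcsin(r / (2 f)), the half-angle used in the distortion model
    return math.asin(r / (2 * f))

def dS_dr(r, f):
    # Analytic partial from (A.3): 1 / sqrt(4 f^2 - r^2)
    return 1.0 / math.sqrt(4 * f**2 - r**2)

def dS_df(r, f):
    # Analytic partial from (A.3): -r / (f * sqrt(4 f^2 - r^2))
    return -r / (f * math.sqrt(4 * f**2 - r**2))

# Check against central finite differences (illustrative r, f)
r, f, h = 600.0, 855.2, 1e-4
num_dr = (S(r + h, f) - S(r - h, f)) / (2 * h)
num_df = (S(r, f + h) - S(r, f - h)) / (2 * h)
assert abs(num_dr - dS_dr(r, f)) < 1e-6
assert abs(num_df - dS_df(r, f)) < 1e-6
```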
B. Basic Rotation Matrices
The rotation matrices about the $X$, $Y$, and $Z$ axes (frame-rotation convention) are
$$R_X(\alpha) = \begin{pmatrix}1 & 0 & 0\\ 0 & \cos\alpha & \sin\alpha\\ 0 & -\sin\alpha & \cos\alpha\end{pmatrix} \tag{B.1}$$
$$R_Y(\beta) = \begin{pmatrix}\cos\beta & 0 & -\sin\beta\\ 0 & 1 & 0\\ \sin\beta & 0 & \cos\beta\end{pmatrix} \tag{B.2}$$
$$R_Z(\gamma) = \begin{pmatrix}\cos\gamma & \sin\gamma & 0\\ -\sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1\end{pmatrix} \tag{B.3}$$
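These basic rotations can be generated and sanity-checked numerically. The sketch below assumes the frame-rotation sign convention (positive angle rotates the frame, not the vector); every product of basic rotations must be orthogonal with determinant +1.

```python
import numpy as np

def rot_x(a):
    """Basic rotation about the X axis (frame-rotation convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def rot_y(a):
    """Basic rotation about the Y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c, 0.0,  -s],
                     [0.0, 1.0, 0.0],
                     [  s, 0.0,   c]])

def rot_z(a):
    """Basic rotation about the Z axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

# Any composition of basic rotations is a proper rotation matrix:
R = rot_z(0.3) @ rot_y(-0.1) @ rot_x(0.7)
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```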
C. Zernike 7x7 Models
We use the three Zernike $7\times 7$ masks $M_{11}^R$, $M_{11}^I$, and $M_{20}$ deduced by Gao et al. [24] (C.1); the numerical templates are not reproduced here.
For each pixel-level edge point $(x, y)$ obtained by the Sobel operator, we convolve the image with $M_{11}^R$, $M_{11}^I$, and $M_{20}$ to obtain the three Zernike moments $Z_{11}^R$, $Z_{11}^I$, and $Z_{20}$. The rotation angle $\omega$ is calculated by
$$\omega = \arctan\frac{Z_{11}^I}{Z_{11}^R} \tag{C.2}$$
and the length $l$ from the center point to the subpixel edge point is obtained by
$$l = \frac{Z_{20}}{Z_{11}^R\cos\omega + Z_{11}^I\sin\omega} \tag{C.3}$$
Hence the coordinates of the subpixel edge point are
$$\begin{pmatrix}x_s\\ y_s\end{pmatrix} = \begin{pmatrix}x\\ y\end{pmatrix} + \frac{Nl}{2}\begin{pmatrix}\cos\omega\\ \sin\omega\end{pmatrix}, \qquad N = 7 \tag{C.4}$$
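The moment-to-edge mapping in (C.2) and (C.3) can be sketched in code. The moment values in the usage line are purely illustrative, and the relocation step $x_s = x + (Nl/2)\cos\omega$ is the standard form for an $N \times N$ mask (assumed here with $N = 7$).

```python
import math

def subpixel_edge(x, y, Z11R, Z11I, Z20, N=7):
    """Relocate a pixel-level edge point (x, y) using Zernike moments.

    Z11R, Z11I, Z20 are the results of convolving the image with the
    M11R, M11I, and M20 masks at (x, y); N is the mask size.
    """
    # (C.2): edge orientation (atan2 resolves the quadrant)
    omega = math.atan2(Z11I, Z11R)
    # (C.3): distance from the mask center to the edge
    l = Z20 / (Z11R * math.cos(omega) + Z11I * math.sin(omega))
    # Relocation along the edge normal, scaled to the N x N mask
    return (x + N * l / 2 * math.cos(omega),
            y + N * l / 2 * math.sin(omega))

# Illustrative moments: an edge slightly to the right of the mask center
xs, ys = subpixel_edge(100, 200, Z11R=0.8, Z11I=0.0, Z20=0.1)
# → (100.4375, 200.0)
```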
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research was supported by the National Natural Science Foundation of China (NSFC) under Grants 41704006 and 41504018 and by the State Key Laboratory of Geo-Information Engineering under Grant SKLGIE2016-Z-21.
[1] W. Gong, "Discussions on localization capabilities of MSL and MER rovers," Annals of GIS, vol. 21, no. 1, pp. 69-79, 2015.
[2] K. S. Ali, C. A. Vanelli, J. J. Biesiadecki et al., "Attitude and position estimation on the Mars Exploration Rovers," in Proceedings of the 2005 IEEE International Conference on Systems, Man and Cybernetics, pp. 20-27, USA, October 2005.
[3] R. Volpe, "Mars rover navigation results using sun sensor heading determination," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), pp. 460-467, Kyongju, South Korea, 1999.
[4] A. Trebi-Ollennu, T. Huntsberger, Y. Cheng, E. T. Baumgartner, B. Kennedy, and P. Schenker, "Design and analysis of a sun sensor for planetary rover absolute heading detection," IEEE Transactions on Robotics and Automation, vol. 17, no. 6, pp. 939-947, 2001.
[5] M. C. Deans, D. Wettergreen, and D. Villa, "A sun tracker for planetary analog rovers," in Proceedings of the 8th International Symposium on Artificial Intelligence, Robotics and Automation in Space, p. 603, Munich, Germany, 2005.
[6] J. Enright, P. Furgale, and T. Barfoot, "Sun sensing for planetary rover navigation," in Proceedings of the 2009 IEEE Aerospace Conference, USA, March 2009.
[7] P. Furgale, J. Enright, and T. Barfoot, "Sun sensor navigation for planetary rovers: theory and field testing," IEEE Transactions on Aerospace and Electronic Systems, vol. 47, no. 3, pp. 1631-1647, 2011.
[8] P. Yang, L. Xie, and J. Liu, "Simultaneous celestial positioning and orientation for the lunar rover," Aerospace Science and Technology, vol. 34, no. 1, pp. 45-54, 2014.
[9] M. Ilyas, B. Hong, K. Cho, S.-H. Baeg, and S. Park, "Integrated navigation system design for micro planetary rovers: comparison of absolute heading estimation algorithms and nonlinear filtering," Sensors, vol. 16, no. 5, 2016.
[10] Y. Kim, W. Jung, and H. Bang, "Visual target tracking and relative navigation for unmanned aerial vehicles in a GPS-denied environment," International Journal of Aeronautical and Space Sciences, vol. 15, no. 3, pp. 258-266, 2014.
[11] A. Lambert, P. Furgale, T. D. Barfoot, and J. Enright, "Field testing of visual odometry aided by a sun sensor and inclinometer," Journal of Field Robotics, vol. 29, no. 3, pp. 426-444, 2012.
[12] X. Ning, L. Liu, J. Fang, and W. Wu, "Initial position and attitude determination of lunar rovers by INS/CNS integration," Aerospace Science and Technology, vol. 30, no. 1, pp. 323-332, 2013.
[13] Z. He, X. Wang, and J. Fang, "An innovative high-precision SINS/CNS deep integrated navigation scheme for the Mars rover," Aerospace Science and Technology, vol. 39, pp. 559-566, 2014.
[14] X. Ning, M. Gui, Y. Xu, X. Bai, and J. Fang, "INS/VNS/CNS integrated navigation method for planetary rovers," Aerospace Science and Technology, vol. 48, pp. 102-114, 2015.
[15] C.-H. Li, Y. Zheng, C. Zhang, Y.-L. Yuan, Y.-Y. Lian, and P.-Y. Zhou, "Astronomical vessel position determination utilizing the optical super wide angle lens camera," Journal of Navigation, vol. 67, no. 4, pp. 633-649, 2014.
[16] J. Yao, B. Han, and Y. Yang, "Applications of Lodrigues matrix in 3D coordinate transformation," Geomatics and Information Science of Wuhan University, vol. 31, no. 12, pp. 1094-1119, 2006.
[17] Y. Zhan, Y. Zheng, and C. Zhang, "Celestial positioning with CCD observing the sun," in China Satellite Navigation Conference (CSNC) 2013 Proceedings, vol. 245 of Lecture Notes in Electrical Engineering, pp. 697-706, Springer, Berlin, Heidelberg, 2013.
[18] Z. Zhang, "A flexible new technique for camera calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
[19] M. Ilyas, K. Cho, S. H. Baeg, and S. Park, "Absolute navigation information estimation for planetary rovers," in Proceedings of the Asian-Pacific Conference on Aerospace Technology and Science, Jeju Island, p. 1, 2015.
[20] Y. Zhan, Y. Zheng, C. Zhang, G. Ma, and Y. Luo, "Image centroid algorithms for sun sensors with super wide field of view," Cehui Xuebao/Acta Geodaetica et Cartographica Sinica, vol. 44, no. 10, pp. 1078-1084, 2015.
[21] P. Cui, F. Yue, and H. Cui, "Attitude and position determination scheme of lunar rovers basing on the celestial vectors observation," in Proceedings of the 2007 IEEE International Conference on Integration Technology (ICIT 2007), pp. 538-543, China, March 2007.
[22] P. Yang, L. Xie, and J.-L. Liu, "Zernike moment based high-accuracy sun image centroid algorithm," Yuhang Xuebao/Journal of Astronautics, vol. 32, no. 9, pp. 1963-1970, 2011.
[23] L. F. Sui, L. J. Song, and H. Z. Cai, Error Theory and Foundation of Surveying Adjustment, Surveying and Mapping Press, Beijing, China, 2010.
[24] S.-Y. Gao, M.-Y. Zhao, L. Zhang, and Y.-Y. Zou, "Improved algorithm about subpixel edge detection of image based on Zernike orthogonal moments," Zidonghua Xuebao/Acta Automatica Sinica, vol. 34, no. 9, pp. 1163-1168, 2008.
Yinhu Zhan (1,2), Shaojie Chen (2,3), and Donghan He (2)
(1) State Key Laboratory of Geo-Information Engineering, Xi'an, China
(2) Zhengzhou Institute of Surveying and Mapping, Zhengzhou, China
(3) National Time Service Center, Chinese Academy of Sciences, Xi'an, China
Correspondence should be addressed to Shaojie Chen; email@example.com
Received 16 March 2018; Accepted 27 June 2018; Published 1 August 2018
Academic Editor: Jean-Pierre Barriot
Caption: Figure 1: Space relation of the important frames.
Caption: Figure 2: Schematic of sun azimuth measurement.
Caption: Figure 3: 10-m diameter dome and measurement sketch of control points.
Caption: Figure 4: Geometric or physical meaning of the 12 sun sensor parameters.
Caption: Figure 5: Image of the 37 control points.
Caption: Figure 6: Residuals of the fish-eye sun sensor calibration.
Caption: Figure 7: Two typical sun images and their edge-detection results.
Caption: Figure 8: Flow chart of centroid extraction of the sun image.
Caption: Figure 9: Flowchart of heading-determination algorithm.
Caption: Figure 10: Actual picture of the prototype.
Caption: Figure 11: Time series of the predicted elevations of the sun.
Caption: Figure 12: Time series of the heading errors.
Caption: Figure 13: The trajectories of sun image centroids.
Caption: Figure 14: Mean square error series of sun image centroids.
Table 1: Definition of main frames.

Name                            Notation   X axis              Y axis             Z axis
Horizontal frame                H          Local north         Local west         Opposite gravity
Earth-centered inertial frame   E          Vernal equinox      Right-handed       North pole
Earth-centered fixed frame      F          Prime meridian      Right-handed       North pole
Transition frame                T          Rover heading       Right-handed       Opposite gravity
Inclinometer frame              I          X axis of itself    Y axis of itself   Right-handed
Sun sensor frame                C          Horizontal pixels   Right-handed       Optical axis

Table 2: Results of the 6 interior parameters in 8 directions.

Raw azimuth   x0 (pixel)   y0 (pixel)   f (pixel)   k1         k2          k3         Std(Δx)   Std(Δy)
0°            1502.292     1585.044     855.242     0.212471   -0.740583   0.877803   0.1621    0.1728
45°           1502.209     1584.992     855.142     0.221231   -0.764446   0.897239   0.1604    0.1852
90°           1502.280     1585.034     855.156     0.211819   -0.734883   0.870884   0.1788    0.1553
135°          1502.365     1585.170     855.201     0.181307   -0.653738   0.807112   0.1640    0.1481
180°          1502.293     1585.219     855.047     0.161114   -0.594148   0.755274   0.1454    0.1674
225°          1502.209     1585.248     855.101     0.160012   -0.594534   0.758023   0.1547    0.1603
270°          1502.133     1585.201     855.086     0.191343   -0.678503   0.825247   0.1763    0.1502
315°          1502.141     1585.089     855.214     0.203631   -0.713076   0.853851   0.1809    0.1521

Table 3: Values of γ and ψ in 8 directions.

Direction number   Approximate azimuth   γ (")    ψ (")
1                  0°                    1123.9   121.7
2                  45°                   1120.6   142.1
3                  90°                   1124.1   139.5
4                  135°                  1126.6   122.5
5                  180°                  1129.3   129.2
6                  225°                  1124.7   135.1
7                  270°                  1122.1   144.0
8                  315°                  1125.0   138.4

Table 4: Test conditions.

Date            Beijing Time   Observations   Sun Elevation   Weather
July 20, 2017   12:05-13:01    252            74.3-75.7°      Thin clouds
Oct 14, 2017    11:42-13:12    250            44.3-46.7°      Sunny
Nov 11, 2017    11:59-13:13    200            35.6-37.7°      Thin fog
Research Article in Advances in Astronomy, 2018.