Article

Spacecraft and Asteroid Thermal Image Generation for Proximity Navigation and Detection Scenarios

by
Matteo Quirino
* and
Michèle Roberta Lavagna
Department of Aerospace Science and Technology, Politecnico di Milano, 20156 Milano, Italy
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(13), 5377; https://doi.org/10.3390/app14135377
Submission received: 15 April 2024 / Revised: 8 June 2024 / Accepted: 18 June 2024 / Published: 21 June 2024
(This article belongs to the Section Aerospace Science and Engineering)

Abstract

On-orbit autonomous relative navigation performance strongly depends on both the sensor suite and the state reconstruction approach. Whenever that suite relies on image-based sensors working in the visible spectral band, the illumination conditions strongly affect the accuracy and robustness of the state reconstruction outputs. To cope with that limitation, we investigate the effectiveness of exploiting image sensors active in the IR spectral band, which are not limited by the lighting conditions. Effective and comprehensive testing and validation campaigns on navigation algorithms require large image datasets, which are available or easy to obtain in the visible band but neither trivial to produce nor readily accessible in the thermal band. This paper presents an open-source tool that exploits accurate finite volume thermal models of celestial objects and artificial satellites to create thermal images driven by the camera dynamics. The thermal model relies on open-source CFD code (OpenFOAM), pushed to capture the finest details of the terrain or target geometries; the resulting temperature field is then processed to compute the view factors between the camera and each face of the mesh, from which the radiative flux emitted by each face is extracted. These data feed the rendering engine (Blender) that, together with the camera position and attitude, outputs the thermal image. The complete pipeline, fed by the kinematics of the orbiting target and of the imaging sensor, outputs a synthetic thermal image dataset exploitable by a relative navigation block or any other research application. Furthermore, within the same framework, the article proposes two different thermal sensor models, but any sensor model can be applied, providing full customization of the output. The tool performance is critically discussed for two typical proximity scenarios, an asteroid and an artificial satellite; for both cases, the challenges and capabilities of the implemented tool for synthetic thermal images are highlighted. Finally, the tool is applied in an ESA-sponsored phase B mission design and in related research works, whose results are reported in the article.

1. Introduction

Thermal images have an advantage over optical images since no light sources are needed. This has a tremendous impact on the navigation reliability of a space mission, enabling the spacecraft to see the dark regions of planets and asteroids or detect other spacecraft in the dark.
As an example, operating around an asteroid during proximity phases requires continuous pointing and navigation even during night phases; thermal cameras can guarantee the detectability of the shape of the asteroid in all conditions. Furthermore, a major advantage of thermal images over optical ones is that they enable an easier detection of objects in the deep sky despite the lower instrument resolution [1,2]. Optical cameras have more pixels, which translates into a higher spatial resolution and lower detection limits for the size of the objects, but optical images also capture the light of all the stars in the field of view, resulting in an image full of bright spots other than the target. In real thermal images used for the detection of asteroids and planets, the target is instead the only bright spot in the image, making the detection much easier for navigation purposes when the images are processed in a real-time closed loop [3].
These are just the two biggest advantages of thermal images; many other applications are made possible by thermal infrared (TIR) images, such as site selection in dark regions, deployable satellite tracking and navigation, and target marking [1]. Indeed, there is a growing literature on the application of thermal infrared images for navigation in adverse illumination conditions in proximity scenarios, for example, for active debris removal [4,5,6]. In the same context, an interesting application of TIR images is their combination with visible (VIS) images using fusion algorithms to retain the advantages of both sensors (i.e., high definition for VIS and robustness in low illumination conditions for TIR); on this matter, the interested reader can refer to [2,7,8].
Despite the unquestionable advantages of TIR images, the literature on how to create synthetic thermal images is still scarce. At the same time, the ability to create thermal images is fundamental for any mission involving thermal cameras, for the following reasons:
  • Thermal image datasets can be created to train and test navigation algorithms in all mission phases.
  • Software- and hardware-in-the-loop electronics, such as image processing boards, can be tested.
  • The scientific output of the mission can be simulated.
Given the importance of the above points, this paper describes a method to create thermal images starting from thermal simulation results with high geometrical accuracy and reports the application of the method in two different scenarios: one where the target is a spacecraft, and one where the target is an asteroid. The proposed method uses a 3D finite volume method to compute the temperature field of the object, accounting for all the thermal properties of the different materials of the target. The generation of the thermal image is then performed using a rendering engine, and finally, two thermal camera models are proposed as starting points to simulate the actual output of the instrument.
The structure of the work is as follows: In Section 2, we report the modeling of the infrared radiation and the infrared camera models. In Section 3, the method to generate the infrared images is presented and applied to a spacecraft and to an asteroid scenario. In Section 4, the method is applied to real case scenarios to show the actual capability of the method and its importance in a real space mission design. In the end, Section 5 gathers the conclusions of the presented work.

2. Thermal Modeling

This section reports the major steps for the creation of the thermal image starting from the finite volume thermal model up to the thermal camera model output.

2.1. Finite Volume Thermal Model

The temperature field of the object is computed using the finite volume open-source code OpenFOAM. The code takes as input the geometry of each component of the target in the .stl file format. The actual computational mesh is created from such files and, for each component, the material properties are assigned, as well as the thermal contact resistances. The solar radiation direction and intensity are provided as input and, starting from a guessed initial temperature field for each component, the finite volume method is applied to solve for the temperature field. The code offers the geometrical accuracy required to create realistic thermal images and the flexibility needed to interface with the rendering engine. For a detailed description of the code and the numerical testing, the reader can refer to [8,9]. The code has been compared against real thermal vacuum test data in a CubeSat scenario, proving reliable in computing the temperature field; moreover, preliminary mono-material cases using such code for thermal image generation are presented in [2,10,11].
For the multi-region case presented in the article, different properties are assigned to the different parts of the object. From the resulting temperature field, each mesh face temperature is available together with the face orientation, which is fundamental for the view factor computation as reported in the next sections.

2.2. Radiation Modeling

There are two types of thermal cameras: photon counters and microbolometers [12]. The presented research work focuses on the latter, which are more versatile and robust than the former and have a wide space heritage; for more details, the reader can refer to [13,14,15]. Microbolometer pixels are sensitive to the heat flux that hits the pixel; thus, the goal is to compute the heat flux per unit area emitted by each mesh face in the frequency spectrum of the thermal camera. For microbolometers, this is the Long Wavelength Infrared (LWIR) spectrum, from 8 μm to 14 μm; therefore, the heat flux emitted in this range by each face over a hemisphere [16], i.e., the emissive power [W m−2], is given by [13]:
$$ q_{out}(T) = F(T) = \pi \int_{0}^{\infty} \varepsilon(\lambda)\, B(\lambda, T)\, R(\lambda)\, d\lambda , \tag{1} $$
where $\varepsilon(\lambda)$ is the emissivity, $B(\lambda, T)$ is the Planck function, and $R(\lambda)$ is a function bounded between 0 and 1 that represents the detection efficiency of the bolometer in the LWIR range, the transmittance of the band-pass filter, and the transmittance of the germanium lens. An example of the shape of this function is reported in [13].
In order to unify the notation with ([16], Section 10.1), the emissive power can be rewritten in terms of the intensity of radiation, i [W m−2 sr−1]:
$$ q_{out}(T) = \pi \int_{0}^{\infty} \varepsilon(\lambda)\, B(\lambda, T)\, R(\lambda)\, d\lambda = \pi i . \tag{2} $$
The next step is to compute the portion of heat flux emitted by an infinitesimal part of a generic mesh face and intercepted by an infinitesimal part of a generic camera pixel, hence introducing the view factor between the mesh face and the pixel [16].
An infinitesimal area of a mesh face $dA_f$ emits heat flux in all directions, distributing it over a hemisphere, with a total amount described by Equation (2). An infinitesimal area of a camera pixel $dA_p$ intercepts a portion of that hemispheric heat flux according to the solid angle $d\omega_p$ it subtends with respect to $dA_f$. Indeed, the heat flux that leaves $dA_f$ within $d\omega_p$ stays within $d\omega_p$ as it travels towards $dA_p$. If $dA_p$ is placed at a distance $r = |r_{fp}|$ at an angle $\beta_{fp}$ with respect to the normal of $dA_f$, and it is normal to the radial direction $\hat{r}_{fp} = r_{fp}/|r_{fp}|$, then $dA_p$ sees $dA_f$ as having an area $\cos\beta_{fp}\, dA_f$. At the same time, if $dA_p$ is not normal to the radial direction $\hat{r}_{fp}$, then $dA_f$ actually sees $\cos\beta_{pf}\, dA_p$, where $\beta_{pf}$ is the angle between the radial direction $\hat{r}_{fp}$ and the normal of $dA_p$; this is the approximate projection (assuming $dA_p$ infinitesimal) of $dA_p$ onto the hemisphere of radius r. A visualization of the angles is reported in Figure 1.
Using such a description, the heat flux emitted by d A f and intercepted by d A p can be written as:
$$ dQ_{fp} = i\, d\omega_p \cos\beta_{fp}\, dA_f . \tag{3} $$
Using the definition of a solid angle, $d\omega_p$ can be written as the portion of the area of the sphere over the radius of the sphere squared. The portion of the sphere area to be used is the projection of $dA_p$ onto the sphere, hence $\cos\beta_{pf}\, dA_p$, pictured in Figure 2. Thus, Equation (3) can be written as:
$$ dQ_{fp} = i\, \frac{\cos\beta_{pf}\, dA_p}{|r_{fp}|^2} \cos\beta_{fp}\, dA_f , \tag{4} $$
where $r_{fp}$ is the position vector that goes from $dA_f$ towards $dA_p$, i.e., the position of $dA_p$ with respect to $dA_f$. Introducing the unit normal vectors of $dA_f$ and $dA_p$ as $\hat{n}_f$ and $\hat{n}_p$, respectively, $\cos\beta_{fp}$ and $\cos\beta_{pf}$ can be written as simple scalar products between the unit normal and the normalized position vector, paying attention to invert the position vector for $dA_p$ (i.e., $\hat{r}_{pf}$ instead of $\hat{r}_{fp}$). A visualization of the formulation is reported in Figure 3. Thus, the equation can be written as:
$$ dQ_{fp} = i\, \frac{(\hat{n}_f \cdot r_{fp})(\hat{n}_p \cdot r_{pf})}{|r_{fp}|^4}\, dA_f\, dA_p . \tag{5} $$
The intensity of radiation can be rewritten by inverting Equation (2); therefore, the final expression for the heat flux emitted by $dA_f$ and intercepted by $dA_p$ is the following:
$$ dQ_{fp} = \int_{0}^{\infty} \varepsilon(\lambda)\, B(\lambda, T)\, R(\lambda)\, d\lambda \; \frac{(\hat{n}_f \cdot r_{fp})(\hat{n}_p \cdot r_{pf})}{|r_{fp}|^4}\, dA_f\, dA_p . \tag{6} $$
Given the small size of the pixels and their close positions, it is assumed that the view factors between one mesh face and all the camera pixels are equal; thus, it is sufficient to compute the view factor between the mesh face and the full camera array. The formulation then becomes:
$$ dQ_{fc} = \int_{0}^{\infty} \varepsilon(\lambda)\, B(\lambda, T)\, R(\lambda)\, d\lambda \; \frac{(\hat{n}_f \cdot r_{fc})(\hat{n}_c \cdot r_{cf})}{|r_{fc}|^4}\, dA_f\, dA_c . \tag{7} $$
The formula needs to be integrated over the camera area and over the mesh face. This step is discretized, assuming the mesh face is small enough to write the expression as:
$$ \Delta Q_{fc} = \int_{0}^{\infty} \varepsilon(\lambda)\, B(\lambda, T)\, R(\lambda)\, d\lambda \; \frac{(\hat{n}_f \cdot r_{fc})(\hat{n}_c \cdot r_{cf})}{|r_{fc}|^4}\, \Delta A_f\, \Delta A_c . \tag{8} $$
In the expression above, the approximation lies in the fact that the mesh face normal vector is assumed constant over the small area of the real object it represents; therefore, it is a good approximation only when the mesh face is small compared to the size of the object. Indeed, when the object geometry is meshed, it becomes a multitude of small flat faces. As for the camera, it is assumed to be a flat surface; hence, it can be approximated as one flat face with a given area $\Delta A_c$ and normal vector $\hat{n}_c$. To be more specific, the approximations just mentioned are introduced in the step between Equation (7) and Equation (8), where the view factor part of the equation is approximated using the following formula:
$$ \int_{A_c} \int_{A_f} \frac{(\hat{n}_f \cdot r_{fc})(\hat{n}_c \cdot r_{cf})}{|r_{fc}|^4}\, dA_f\, dA_c \;\approx\; \frac{(\hat{n}_f \cdot r_{fc})(\hat{n}_c \cdot r_{cf})}{|r_{fc}|^4}\, \Delta A_f\, \Delta A_c . \tag{9} $$
As verified in [9], such an approximation captures the correct values of the view factors. Dividing by the mesh face area and considering a single diffuse value for the emissivity, the equation provides the heat flux per unit area emitted by each face and intercepted by the camera:
$$ q_{fc} = \frac{\Delta Q_{fc}}{\Delta A_f} = \varepsilon \int_{0}^{\infty} B(\lambda, T)\, R(\lambda)\, d\lambda \; \frac{(\hat{n}_f \cdot r_{fc})(\hat{n}_c \cdot r_{cf})}{|r_{fc}|^4}\, \Delta A_c . \tag{10} $$
To compute the heat flux, the camera position and attitude (i.e., $r_{fc}$ and $\hat{n}_c$) must be provided to the algorithm, as well as the emissivity of each part of the mesh. The result is a field of the heat flux emitted by each mesh face and intercepted by the camera; in this way, the image accounts for the view factor effects of the scene. The field is then passed to the render engine (Blender v2.93, https://www.blender.org/, accessed on 21 March 2024) for the actual image generation. As a general recap, the workflow is summarized in Figure 4: the method starts from the geometry of the object; the thermal simulation produces a mesh and a temperature field, which are both used to compute the view factors given the camera attitude and position; once the view factors are computed, the actual heat flux received by the camera is calculated using Equation (10), and the image is generated. More details on the pipeline implementation and the respective results are presented in the next sections.
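To make the discretized formulation concrete, the following minimal numpy sketch evaluates the gray-body form of the per-face flux later used in Equation (18) (function and variable names are illustrative, not the authors' code; occlusion between faces is not handled here and is left to the renderer):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def face_heat_flux_to_camera(face_centers, face_normals, face_temps,
                             emissivity, cam_pos, cam_normal, cam_area):
    """Per-face flux intercepted by the camera, gray-body form:
    q_fc = (eps*sigma*T^4/pi) * (n_f . r_fc)(n_c . r_cf)/|r_fc|^4 * dA_c."""
    r_fc = cam_pos - face_centers                      # face -> camera vectors
    r2 = np.einsum('ij,ij->i', r_fc, r_fc)             # |r_fc|^2
    cos_f = np.einsum('ij,ij->i', face_normals, r_fc)  # (n_f . r_fc)
    cos_c = np.einsum('j,ij->i', cam_normal, -r_fc)    # (n_c . r_cf)
    visible = (cos_f > 0) & (cos_c > 0)                # back-facing faces contribute nothing
    intensity = emissivity * SIGMA * face_temps**4 / np.pi  # i [W m^-2 sr^-1]
    return np.where(visible, intensity * cos_f * cos_c / r2**2 * cam_area, 0.0)
```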

2.3. Thermal Camera Model

Microbolometer pixels produce an output voltage proportional to the received heat flux. The voltage is then amplified with a gain and converted to a digital bit string. The conversion is performed by the Analog Digital Unit (ADU) of the camera, and the number of discretization intervals depends on the bit architecture of the ADU; for example, a 16-bit architecture provides 2^16 intervals. The quantized pixel output is called a Digital Number (DN), and its value is proportional to the received heat flux.
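As a toy illustration of the quantization step (the full-scale voltage v_fs and the linear response are assumptions, not data of a specific sensor):

```python
def adu_output(v, v_fs=5.0, bits=16):
    """Quantize an amplified pixel voltage into a Digital Number (DN)."""
    dn = int(v / v_fs * (2**bits - 1))
    return max(0, min(dn, 2**bits - 1))   # clamp to the ADU range

adu_output(1.3)  # -> 17039 for a 16-bit ADU
```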
The approach for modeling the camera is to start from the image produced by the rendering software (Blender v2.93, https://www.blender.org/, accessed on 21 March 2024). By knowing the bit depth of the image (in Blender, either 8 or 16 bit) and the scale values, it is possible to associate the pixel DN of the image produced by Blender with the corresponding heat flux value. Then, using the sensor response curve, i.e., the relation between the DN and the incoming heat flux, it is possible to compute the actual DN produced by the camera. From there, the noise can be added, and the output, in DN, is the actual image produced by the camera. The details of the workflow are described in Section 2.4.
At this point, two scenarios are possible, based on how the thermal camera is manufactured: detection and radiometry. In the radiometry case, described in the next section, the goal is to retrieve the temperature field of the target: once the actual DN is obtained, the sensor response function is inverted again, the corresponding noisy heat flux is obtained, and the radiative temperature is recovered.
In the detection case, the goal is to create an image with high contrast, with no interest in recovering the temperature field. This is achieved by rendering an image with a scale proportional to the DN computed with the proposed method and tuning the scale to reach such a goal; the interested reader can refer to the Automated Gain Control algorithm section of the FLIR-Tau2 user guide (Teledyne FLIR, FLIR Systems, Inc., Wilsonville, OR, USA, https://www.flir.com/support/products/tau-2/?pn=Tau+2&vn=46336100H#Documents, accessed on 21 March 2024) for more information on this point. In the presented case, $q_{fc}$ is converted into a DN field, and the scale is properly set.

2.4. Radiometry

To accurately recompute the temperature field, the distance from the object must be provided in order to be able to use the calibration curve of the thermal camera, usually given at a very short distance from the target [14,15].
The sensor function can be modeled in various ways. From the literature, here we report two sensor functions and their respective proposed implementations.

2.4.1. DN–Temperature Sensor Function

The first approach links DN to temperature with the following relation [15]:
$$ DN = a \cdot T^{\alpha} + b . \tag{11} $$
To compute a, b, and α, a calibration is performed with a black body at different temperatures; the coefficient values are reported in [15]. To use such a model, the heat flux is recovered for each pixel using the image scale values and then converted into the corresponding radiative temperature, assuming the object is a black body and accounting for the object distance.
The effect of the distance is taken into account in the computation of $T_{rad}$, as reported in Equation (12); if the distance were not compensated, the equivalent $T_{rad}$ values would be extremely low. The value is then fed into the sensor function to compute the corresponding DN. Such values are modified according to the noise models, and the noisy temperature is recovered by inverting the sensor function (Equation (11)).
The scale of the image can then be adjusted according to the selected temperature range. The steps are summarized in Equation (12):
$$ q_{fc} \;\rightarrow\; T_{rad} = \left( \frac{q_{fc}\, \pi\, |r_{fc}|^2}{\sigma\, \Delta A_c} \right)^{1/4} \;\rightarrow\; DN = a \cdot T_{rad}^{\alpha} + b \;\rightarrow\; DN_{noisy} = DN + Noise \;\rightarrow\; T_{rad+noise} = \left( \frac{DN_{noisy} - b}{a} \right)^{1/\alpha} . \tag{12} $$
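A compact Python sketch of this chain is reported below (a minimal illustration under assumed names: a, b, and α are the black-body calibration coefficients of [15], and a simple Gaussian term stands in for the camera noise model):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiometry_dn_temperature(q_fc, dist, cam_area, a, b, alpha,
                              noise_std=0.0, rng=None):
    """Sketch of the chain in Equation (12): q_fc -> T_rad -> DN ->
    noisy DN -> recovered noisy radiative temperature."""
    rng = np.random.default_rng() if rng is None else rng
    # distance-compensated radiative temperature (black-body assumption)
    T_rad = (q_fc * np.pi * dist**2 / (SIGMA * cam_area)) ** 0.25
    DN = a * T_rad**alpha + b                         # sensor function, Eq. (11)
    DN_noisy = DN + rng.normal(0.0, noise_std, np.shape(T_rad))
    return ((DN_noisy - b) / a) ** (1.0 / alpha)      # invert Eq. (11)
```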
It must be noted that, in using such an approach, the computation of the received heat flux must be coherent with the thermal camera response function, meaning that $q_{fc}$ must be written using the same input taken by the function that provides the DN. For example, in this case, the function $DN = a \cdot T^{\alpha} + b$ uses T as the input; thus, we cannot use Equation (10), because if we invert that equation to recover the emitted heat flux, we obtain $F(T)$, when instead we need to recover T. A slightly different formulation of $q_{fc}$ that allows obtaining T is therefore needed; a workaround, in this case, is given by the following equation, used in place of Equation (10):
$$ q_{fc} = \frac{\varepsilon\, \sigma\, T_{rad}^4}{\pi}\, \frac{(\hat{n}_f \cdot r_{fc})(\hat{n}_c \cdot r_{cf})}{|r_{fc}|^4}\, \Delta A_c . \tag{13} $$
In using this approach, the attenuation caused by the optical lenses and the sensor responsivity (i.e., the parameter $R(\lambda)$ in Equation (10)) are contained directly in the exponent α of the formula $DN = a \cdot T^{\alpha} + b$, as stated in [15].
Given that the camera response is directly proportional to the received heat flux, the thermal camera model presented in the next paragraph is preferred, as the third calibration parameter α can be eliminated by simply calibrating the camera against the total emissive power of the emitter.

2.4.2. DN-Heat Flux Sensor Function

The second approach follows the same steps as the previous one but uses the following formula for the pixel output [13]:
$$ DN = a \cdot F(T) + b , \tag{14} $$
where F(T) is used to keep the same notation as [13], and it is the emissive power:
$$ F(T) = q_{out}(T) = \pi \int_{0}^{\infty} \varepsilon(\lambda)\, B(\lambda, T)\, R(\lambda)\, d\lambda . $$
As in the previous case, the a and b coefficients are computed by calibration using black bodies at different temperatures; for the exact coefficient values, please refer to [13]. The steps are summarized as follows:
$$ q_{fc} \;\rightarrow\; F(T) = \frac{q_{fc}\, \pi\, |r_{fc}|^2}{\Delta A_c} \;\rightarrow\; DN = a \cdot F(T_{rad}) + b \;\rightarrow\; DN_{noisy} = DN + Noise \;\rightarrow\; F(T)_{noisy} = \frac{DN_{noisy} - b}{a} . $$
With the above formulation, the radiative temperature can be recovered using the sensor calibration curve [13], where each $F_{noisy}(T_{rad})$ value corresponds to a radiative temperature value. As a recap, the workflow is pictured in Figure 5. The pipeline starts from the Blender-rendered image, which is at 16 bit, so each pixel has a DN value between 0 and $2^{16}-1$. Since the maximum and minimum heat flux values of the scene are known, the pixels with the maximum and minimum DN values can be associated with them; in this way, the scale of the image is converted from DN values to $q_{fc}$ values, and every pixel corresponds to a $q_{fc}$ value. From these values, $F(T)$ is computed for each pixel, which is the input variable of the camera model described in Equation (14); therefore, knowing the gain and offset of the thermal sensor function, i.e., a and b, the exact DN value produced by the given thermal camera can be computed via Equation (14). The noise of the given thermal camera can then be added to produce the realistic output of the sensor; for more details on the noise models and characterization of thermal sensors, please refer to [17]. The final step is to convert the noisy DN of each pixel back into $F(T)_{noisy}$ values so that, knowing their dependency on temperature [13], the radiative temperature of each pixel can be obtained.
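The recap above can be sketched in a few lines of Python (a minimal illustration: a 16-bit Blender render and a linear DN–F(T) sensor function are assumed, with a and b from the calibration of [13] and the noise argument as a placeholder for the models of [17]):

```python
import numpy as np

def radiometry_dn_flux(img_dn, q_min, q_max, dist, cam_area,
                       a, b, bit_depth=16, noise=None):
    """Sketch of the DN-heat flux chain: Blender DN -> q_fc -> F(T) ->
    camera DN (Equation (14)) -> noisy DN -> noisy emissive power."""
    # Blender DN -> q_fc: scene min/max map to 0 and 2^bits - 1
    q_fc = q_min + img_dn.astype(float) / (2**bit_depth - 1) * (q_max - q_min)
    F_T = q_fc * np.pi * dist**2 / cam_area          # emissive power at the face
    DN = a * F_T + b                                 # actual camera output
    DN_noisy = DN if noise is None else DN + noise   # camera noise model [17]
    return (DN_noisy - b) / a   # F(T)_noisy; map to T_rad via the curve of [13]
```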
In both approaches, when inverting Equation (10), the term related to the face–camera orientation is left out in order to retain the view factor effects in the final output, as can be observed in the results in the next section. Indeed, it is generally not possible to know the exact shape of the object a priori and compensate for the view factor effects caused by the orientation of each face.
The temperature field can always be corrected with the real emissivity of the material, if known, thus recovering the kinetic temperature. Starting from the radiative temperature, the kinetic one can be computed as:
$$ T_{kin} = \frac{T_{rad}}{\varepsilon^{1/4}} . $$
Figure 5. Radiometry camera model workflow.

2.5. Detection

For the detection case, the camera does not need to be calibrated; thus, the output DN is taken as directly proportional to the heat flux received by the camera, $q_{fc}$. The formulation is therefore the following:
$$ q_{fc} \;\rightarrow\; DN = a \cdot q_{fc} + b \;\rightarrow\; DN_{noisy} = DN + Noise . \tag{17} $$
As stated before, the goal is simply to obtain the actual DN produced by the camera. With this method, $q_{fc}$ is translated into a DN field, and the scale of the image can be set to maximize the contrast between target and background. Commercially available thermal cameras such as the FLIR-Tau2 (https://www.flir.it/products/tau-2/, accessed on 21 March 2024) are indeed also sold without calibration, and they feature image processing algorithms that can set the scale non-uniformly across the image to increase the target contrast; the interested reader can refer to the FLIR-Tau2 user guide (https://www.flir.com/support/products/tau-2/?pn=Tau+2&vn=46336100H#Documents, accessed on 21 March 2024) for more details on such algorithms. As a recap, the workflow of the method is presented in Figure 6. The first part of the pipeline is identical to the one in Figure 5: starting from the image produced in Blender, the known maximum and minimum heat flux values $q_{fc}$ are associated with the maximum and minimum pixel DN values, so the scale of the image is converted from DN values into $q_{fc}$ values. Each pixel $q_{fc}$ value is then converted into the actual DN value produced by the given thermal camera based on the sensor model of Equation (17); in this conversion, the respective noise model can be added [17], so the actual $DN_{noisy}$ value produced by the given thermal camera is computed for each pixel, reproducing the actual output of the instrument.
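A minimal sketch of this detection chain follows (a global min/max stretch is used here as a simple stand-in for the non-uniform AGC algorithms mentioned above; names and the 16-bit output are assumptions):

```python
import numpy as np

def detection_image(q_fc, bit_depth=16, noise=None):
    """Sketch of Equation (17): DN directly proportional to q_fc,
    stretched over the full digital scale for maximum contrast."""
    dn_max = 2**bit_depth - 1
    a = dn_max / (q_fc.max() - q_fc.min())    # gain
    b = -a * q_fc.min()                       # offset
    DN = a * q_fc + b
    if noise is not None:                     # add the camera noise model, if available
        DN = DN + noise
    return np.clip(np.round(DN), 0, dn_max).astype(np.uint16)
```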

2.6. Method Implementation

The method is implemented thanks to the flexibility of the open-source code OpenFOAM and of Blender. The temperature field is exported in the VTK format (https://vtk.org/, accessed on 21 March 2024), a popular format for the scientific visualization of results. On the other side, Blender, thanks to a free add-on named BVtkNodes (https://github.com/tkeskita/BVtkNodes, accessed on 21 March 2024), can import such a file into its node architecture for image rendering. In particular, a Python v3.9 code takes the temperature VTK file, within which the mesh is stored, as well as all the camera data, i.e., pose, pixel size, and sensor size. With such information, the heat flux field emitted by the target and received by the camera, $q_{fc}$, is computed and stored in a different VTK file; this latter file is the one imported in Blender and used for the actual rendering. The overall pipeline is illustrated in Figure 7.
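The glue code between the solver output and the renderer can be sketched as follows (a hypothetical reconstruction, not the project's actual script: pyvista is assumed for VTK I/O, the field name 'T' and the file names are placeholders, and face_heat_flux_to_camera is the gray-body sketch from Section 2.2):

```python
import numpy as np
import pyvista as pv  # assumed VTK reader/writer; any VTK library works

# Load the OpenFOAM temperature field exported to VTK and take its surface
surf = pv.read("temperature_field.vtk").extract_surface()
centers = surf.cell_centers().points          # one point per mesh face
normals = surf.cell_normals                   # unit face normals

# Camera pose and sensor data (illustrative values)
cam_pos = np.array([3.0, 0.0, 0.0])                 # [m]
cam_normal = -cam_pos / np.linalg.norm(cam_pos)     # center-pointing boresight
cam_area = (17e-6 * 640) * (17e-6 * 512)            # 17 um pixels, 640 x 512 array

# Per-face flux with the gray-body sketch from Section 2.2
surf.cell_data['q_fc'] = face_heat_flux_to_camera(
    centers, normals, surf.cell_data['T'], 0.86,
    cam_pos, cam_normal, cam_area)
surf.save("q_fc_field.vtk")                   # imported in Blender via BVtkNodes
```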
Once $q_{fc}$ is computed, any thermal camera model can be applied to it. Indeed, the results presented in this section focus on such a quantity, as everything in the final thermal image is proportional to it. Furthermore, since $q_{fc}$ is scaled by the emissivity value ε of the material, ε clearly has a big impact on the final result, as reported in the following subsections. The presented images are noiseless, and the black and white scale is set from the minimum to the maximum heat flux values of the scene, producing a realistic thermal image upon which, given the presented thermal camera models, the actual output of the instrument can be produced. Since the goal here is to test the pipeline, the computation of $q_{fc}$ is the one reported in Equation (18), where the optics attenuation is not considered:
$$ q_{fc} = \frac{\varepsilon\, \sigma\, T_{rad}^4}{\pi}\, \frac{(\hat{n}_f \cdot r_{fc})(\hat{n}_c \cdot r_{cf})}{|r_{fc}|^4}\, \Delta A_c . \tag{18} $$
This is a simplification because, within the limited wavelength range of the bolometer, the measured heat flux is not necessarily proportional to $T^4$; however, since the image can always be scaled to the maximum heat flux observed in the scene, the final image is not affected. Furthermore, the simplification does not affect the pipeline structure or the details of the final image, demonstrating the flexibility of the method and its ability to produce relevant data without a specific camera model; indeed, the full camera characterization (calibration and noises) is not always available, and the pipeline can cope with that. That said, this simplification shall be used just to test the pipeline and not to produce thermal images for real-case scenarios. The pipeline is currently applied with the complete formulation, which considers the optics attenuation and the camera noise, for the thermal camera data available from Hayabusa2 [1,3,13,14]; the results using such a thermal camera model are reported in Section 4.3. The proposed method has been thoroughly tested under multiple environmental conditions with different heat flux ranges, different combinations of material properties, and background disturbances. The interested reader can refer to [17] for more details on this point and on the application of the pipeline using the complete formulation with optics attenuation and camera noise.

3. Results

This section presents the results of the application of the method reported in Section 2. The goal is to test the ability of the pipeline to catch all the geometrical details of the scene and to reproduce the effects of the different material emissivities as well as of the view factors. The pipeline can be tuned with any camera and noise model; thus, given a specific thermal camera, it allows correlating the generated synthetic thermal images with real ones, but this is left as a future step.

3.1. Spacecraft Case

The TANGO satellite is used as a test case because the future goal is to create a thermal image dataset equivalent to the ones already available in the optical spectrum [18] so that, given a mission with both visible and thermal cameras, the detection and navigation algorithms can be compared.

3.1.1. Geometry and Materials

The exact TANGO spacecraft topology is not accessible; nonetheless, a representative geometry made of multiple parts is created, and each part is assigned an emissivity value. The geometry, reported in Figure 8, is made of 205 solar cells, 5 panels, 5 antennas, 7 patch antennas, and 1 coil with 2 supports, for a total of 225 parts. The emissivity of each part is reported in Table 1.

3.1.2. Results

All the thermal images presented are produced using a circular orbit at 3 m from the center of TANGO with the camera in a center-pointing attitude. The first thing to highlight in the results is the effect of the different components' emissivities. As clearly visible for the camera pose reported in Figure 9, the solar cells are isothermal with the support panel, making them invisible in the temperature field image (left image in Figure 9); however, due to the difference between the emissivity of the solar cells and that of the panels, 0.86 and 0.40 respectively, the heat flux received by the camera is completely different, creating a sharp contrast in the final thermal image (right image in Figure 9), where the solar cell shapes are indeed visible.
The same goes for the patch antenna: completely invisible, being isothermal with the lateral panel (left image in Figure 9), it stands out clearly in the thermal image (right image in Figure 9). Furthermore, due to the different orientations of the mesh faces of the patch antenna with respect to the camera, given its curved shape, it is possible to appreciate the effects of the view factors on the amount of heat flux intercepted by the camera: the closer the face normal is aligned to the camera normal, the brighter the color, and vice versa. The same consideration holds for other curved shapes, such as the cylindrical antennas of TANGO (top right, right image in Figure 9).
Figure 9. Tango temperature field (left) and radiance field intercepted by the camera q_fc (right).
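The center-pointing camera poses used for these image sequences can be generated with a simple look-at construction; a sketch follows (frame conventions and names are assumptions, not the authors' definitions):

```python
import numpy as np

def center_pointing_poses(radius, n_frames, up=np.array([0.0, 0.0, 1.0])):
    """Camera positions on a circular orbit around the target's center,
    with the boresight (camera z axis, by convention here) pointing at it."""
    poses = []
    for theta in np.linspace(0.0, 2 * np.pi, n_frames, endpoint=False):
        pos = radius * np.array([np.cos(theta), np.sin(theta), 0.0])
        z = -pos / np.linalg.norm(pos)             # boresight towards the center
        x = np.cross(up, z)
        x /= np.linalg.norm(x)
        y = np.cross(z, x)
        poses.append((pos, np.column_stack((x, y, z))))  # position, attitude matrix
    return poses

poses = center_pointing_poses(3.0, 200)  # e.g., 200 poses on the 3 m orbit
```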
The effects of the emissivities and view factors are even more visible in an arbitrary sequence of images taken from an orbit around the spacecraft pointing at its center (Figure 10). As the solar panel normal gets closer to being perpendicular to the camera normal, the intercepted heat flux decreases, as reported in the second and fourth rows of Figure 10. To show the different orientations of the panels, the frames are not taken at equal intervals; thus, the second and fourth rows of frames have different angles with respect to the camera, even though they look symmetric. This also highlights how small angular differences in the camera–object system can affect the results. To better clarify this last point, two different orientations are reported in Figure 11, where the drop in the received heat flux due to the different orientation of the solar panel can be seen; the same consideration holds for all the panels of the spacecraft. In conclusion, the actual output of the pipeline is a black and white image that is sent to the navigation chain; the final grayscale image is reported in Figure 12.

3.1.3. Spacecraft Case Conclusions

The goal of the method was to create a dataset of thermal infrared images of a spacecraft in order to test navigation algorithms. The reported results show that all the details of the target geometry, the effects of the multiple material emissivities, and the effects of the different view factors between the camera and the object geometry are retained in the final thermal image. In this way, the method can produce a realistic thermal image dataset to start testing the performance of navigation algorithms in the infrared spectrum. The interested reader can refer to [10,17] for the usage of such images for navigation purposes. In particular, the ASTRA Polimi research team is now focused on testing the use of such images in combination with VIS images in order to cover the phases of proximity operations with low or adverse illumination conditions, as reported in [10] and in Section 4.1.
It must be highlighted that the method as it is presented is still not validated; therefore, the accuracy of the thermal images is not guaranteed. Nonetheless, given the details of the output and the qualitative correctness, it can be used right away as a starting point.
Figure 10. Temperature field (left) and radiance field q_fc (right) for an orbit at 5 m around TANGO.
Figure 11. In the last row, the received radiative heat flux reduces due to the view factors when surface normals are almost parallel with the camera normal. Temperature field (left) and radiance field intercepted by the camera q_fc (right).
Figure 12. Radiance field q_fc (left) and infrared black and white image (right).
The validation of the method requires the calibration of the thermal camera and the correlation of real thermal images taken in a controlled environment with the ones produced by the method. The whole method is built to be fully tunable; thus, the camera calibration curve, i.e., the function that provides the DN, can be easily modified inside the pipeline. The ASTRA Polimi team is now working on such topics; the interested reader can refer to the ESA e.Inspector project for future developments (more on this in Section 4).
Regarding the computational time for the generation of thermal images, it must be highlighted that it strictly depends on the number of mesh faces, as all the view factors must be computed for each camera pose in order to calculate the actual infrared heat flux received by the camera. Once the heat flux is computed, the rendering of the image through Blender depends on the performance of the GPU of the machine. As an order of magnitude, for the TANGO spacecraft case, the mesh has six million faces, and it takes six hours in total to compute the heat flux for two hundred camera poses and then render the images.

3.2. Asteroid Case

For the asteroid case, Ryugu has been selected, as thermal images from the Hayabusa2 mission are available, thus enabling the future validation of the results. Furthermore, the Ryugu geometry is freely available; on the other hand, all the available files contain corrupted geometry that must be cleaned before the thermal simulation.

3.2.1. Geometry and Materials

Two geometry models are considered: the 800K-face model and the 3M-face model. Both contain corrupted geometry, by which we mean all unrealistic sudden changes in face normals and face deformations; some examples are reported in Figure 13. Thus, starting from the 800K model, the first step is to clean the geometry to perform a smooth thermal simulation.
A fast approach to cleaning the geometry is to apply a smoothing filter to the whole asteroid using Blender sculpting modifiers. Although the result is perfect for generating the mesh for the thermal analysis, there is an evident loss of terrain detail, as reported in Figure 14; therefore, this solution is retained just to check and test the numerical setup of the thermal simulation but is discarded for the final output, as the goal is to retain all the terrain details to produce an image as close as possible to the real one.
Analyzing the 3M model, no differences can be spotted in Figure 15, but given the smaller faces, the 3M model better approximates the terrain curves and details, easing the meshing phase of the simulation. On the other hand, as in the 800K model, corrupted geometry is still present, and it is actually worse, since the smaller faces add detail to the geometry errors as well, as shown in Figure 16.
Since the goal is to produce a realistic image, the 3M model is used, as it provides a more accurate mesh. To clean the geometry without losing too much detail, a weighted-paint smoothing filter is applied just to the corrupted regions. This Blender sculpting technique consists in painting the regions to be smoothed with a color proportional to the smoothing strength: the hottest colors correspond to heavy smoothing and the cold colors to lighter smoothing. All the healed regions are reported in Figure 17, where it is possible to see all the regions with corrupted geometry.
The final result is reported in Figure 18; from the picture, it is possible to verify that the realism of the geometry is retained, and at the same time, the geometry is clean enough to be meshed for the thermal analysis.
Figure 17. Smoothing filter weights on the Ryugu 3M model. Red corresponds to the maximum smoothing available, down to green for the minimum, and violet for zero smoothing applied.
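The region-weighted smoothing just described can also be driven from Blender's Python API; a minimal sketch follows (the object name and vertex indices are placeholders, and in the actual workflow the weights were painted interactively in weight-paint mode):

```python
import bpy

obj = bpy.data.objects["Ryugu_3M"]            # hypothetical object name
bpy.context.view_layer.objects.active = obj

group = obj.vertex_groups.new(name="SmoothWeights")
corrupted_vertices = [0, 1, 2]                # placeholder vertex indices
group.add(corrupted_vertices, 1.0, 'REPLACE') # weight 1.0 = full smoothing

mod = obj.modifiers.new(name="HealGeometry", type='SMOOTH')
mod.vertex_group = "SmoothWeights"            # restrict smoothing to the painted region
mod.factor = 0.5
mod.iterations = 10
bpy.ops.object.modifier_apply(modifier=mod.name)
```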
The number of cells on the Ryugu exterior face is increased via the snapping parameters to overcome the radiation-induced ill-conditioning of the linear system produced in the finite volume discretization process. Due to the low conductivity of the asteroid regolith, at the surface of Ryugu, where radiation and diffusion are combined, radiation overcomes diffusion in every surface cell. This creates instability due to the large oscillations caused by the errors in the temperature field, which are raised to the fourth power in the code. To better characterize the problem, the code was benchmarked with a 12 m cube and different mesh discretizations. The analysis of this problem is out of the scope of the present work; in conclusion, reducing the cell size and using relaxation is enough to stabilize the algorithm. Given the problem, though, different mesh resolutions were tested to verify that the results were correct at least from a numerical point of view.
The final simulation is single-material and run in steady-state mode; moreover, to avoid losing convergence in the first steps, the simulation is started with a high conductivity value of 270 W m−1 K−1. In this way, the diffusion contribution in the finite volume discretization is larger than the radiation term, and the simulation starts more smoothly. Once convergence is reached, the conductivity is lowered to 1 W m−1 K−1 (whilst keeping the infrared emissivity at 0.97 and the visible absorptivity at 0.95 [19,20]). Using a high conductivity before the very low one of the asteroid ensures a starting temperature field closer to the final one, hence reducing the initial temperature oscillations given by the guessed starting field. All the mesh sizes tested to prove the numerical convergence of the grid are reported in Table 2.

3.2.2. Results

All the thermal images presented are produced using a circular orbit at 3 km from the center of Ryugu with the camera in a center-pointing attitude. The final temperature field is reported in Figure 19. The level of detail that can be reached by the tool is very high, but of course, in this case, the computational cost is much higher than in the spacecraft case.
As for the spacecraft case, the view factors play a big role. To better clarify this point, a sequence of frames is taken using a circular orbit around the asteroid with the camera facing the center. From Figure 20, it is possible to see how the rocks and faces parallel to the camera create bright contrasts, producing a realistic thermal image. In Figure 21, it is clear how the curved shape of the object lowers the view factors, moving from the equator region towards the polar region; as a consequence, the heat flux intercepted by the camera is higher when looking at the equator. In conclusion, an example of the final output in black and white scale is reported in Figure 22.

3.2.3. Asteroid Case Conclusions

As for the spacecraft case, the goal of the method was to produce a thermal image retaining all the terrain details of an asteroid geometry. The reported results show that the pipeline is able to create images with a very high level of detail, producing realistic thermal images of complex terrain geometries; the pipeline therefore enables testing navigation algorithms in such scenarios. As for the spacecraft case, it must be pointed out that the method is still not validated; thus, the accuracy of the generated images is not guaranteed. Nonetheless, given the ability to deal with such a level of geometric complexity and the qualitative correctness of the results, this is a good starting point. In particular, as for the spacecraft case, the ASTRA Polimi team is focused on the application of such images in combination with VIS images to cover the proximity phases in which VIS images suffer from low or adverse illumination conditions.
In order to validate the thermal images of Ryugu, the thermal model must be tuned and run in transient mode to recreate the exact temperature field, and the thermal images must be recreated with the exact thermal camera parameters and noise levels of the Hayabusa2 mission. The pipeline is perfectly able to do that, but it is out of the scope of this article, which focuses entirely on the method.
Figure 20. Temperature field (left) and radiance field q_fc (right).
Figure 21. Ryugu temperature field (left) and radiance field q_fc (right).
Figure 22. Radiance field q_fc (left) and infrared black and white image (right).

4. Applications

This section presents the application of the method in real research projects, where the tool has proved fundamental in supporting the analyses. Three different applications are presented: the first is the fusion of TIR images with visible ones to increase the visibility of the target; the second focuses on far-range detection in the context of an ESA phase B space mission on debris detection and approach; the last shows how the method can also be applied to the generation of the Earth background.

4.1. Image Fusion

The preliminary results of the tool presented in Section 2 were applied in [2,10,11] for image fusion, with the goal of enhancing optical image visibility using thermal infrared camera data. The context of such research work was facing problems like the one presented in Figure 23c, where, in a proximity scenario, the target object (in this case, the TANGO satellite) enters a low-visibility condition such as the eclipse phase, in which the object is in complete shadow. This is exactly one of the problems described in Section 1.
In this scenario, the thermal image generation pipeline is able to provide the requested thermal images with the same camera attitude as the visible ones, thus enabling the fusion of the two. As visible from the results reported in Figure 24, the visible image has parts in complete shadow that cannot be used by the algorithms to fully detect the shape of the object, thus missing features that could be used as key points by the navigation algorithms. Such regions are completely visible in the thermal infrared image, so fusing the thermal image with the visible one provides a third image that retains the detail of the visible spectrum image (thermal cameras usually have smaller pixel arrays, hence fewer details) but without shadowed regions. Figure 24 reports the best fusion methods; for a deeper analysis and comparison of different fusion methods, the reader can refer to [2].
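For orientation only, the mechanics of combining two registered frames can be sketched as below; this naive weighted blend is not the IFEVIP or ADF algorithm evaluated in [2] (OpenCV is assumed for resizing):

```python
import numpy as np
import cv2

def naive_fusion(vis, tir, w_tir=0.4):
    """Pixel-wise blend of registered VIS and TIR grayscale frames:
    upsample the coarser TIR image to the VIS resolution, then mix."""
    tir_up = cv2.resize(tir, (vis.shape[1], vis.shape[0]))
    fused = (1.0 - w_tir) * vis.astype(float) + w_tir * tir_up.astype(float)
    return np.clip(fused, 0, 255).astype(np.uint8)
```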

4.2. Detection

The tool is also used for far-range detection, where the advantages of the thermal image are clearer. The object analyzed in this case is the VESPA (VEga Secondary Payload Adapter) debris at a 100 m distance. The object is pictured in Figure 25, and the interested reader can refer to [11,18] for more information.
The context of the research work in which the method is applied is a Phase B mission funded by ESA, named e.Inspector, whose main goal is to perform proximity orbits around the non-cooperative object to demonstrate the navigation capability of dedicated algorithms.
Figure 25. VESPA adapter (left) and thermal model (right).
The generated synthetic thermal image is compared with the corresponding visible one in Figure 26 (the picture sizes are different because the visible and TIR cameras have different pixel array sizes). As stated in the introduction, the visible image captures the light of all the stars, making it harder to detect the target than in the thermal infrared image, where the target is a single bright point; the image processing algorithms might therefore take more iterations, and thus more time, to detect and isolate the object in the visible image. The same goes for the long-exposure images in Figure 27 (credits to M. Bechini for the images in the visible spectrum [18]). Even though it is possible to remove the stars from the images by knowing the camera pose and using star catalogs, it is a time-consuming operation that increases the image processing overhead in a real-time navigation scenario, thus leaving the thermal image as the faster way to detect the target in this case.
Of course, in a real scenario, as reported in [3], the camera noise level and response function must be modeled as presented in Section 2 to clearly assess the detection range of the object. The tool is able to take these data as input, together with the noise models or noise maps, once the full camera characterization is complete. Nonetheless, if the response function and noise data are not available, the tool can still provide the team with a realistic input that can be used to test the navigation and detection chain on ground, something that is fundamental to access the next phases of the mission.
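As an illustration of why a single bright spot simplifies the processing, a minimal detection sketch follows (Otsu thresholding plus connected components via OpenCV; a generic stand-in, not the mission's actual detection chain):

```python
import numpy as np
import cv2

def extract_target_centroid(tir_img):
    """Locate the brightest blob in an 8-bit TIR frame: global Otsu
    threshold, then keep the largest connected component."""
    _, mask = cv2.threshold(tir_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None                                      # background only
    target = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip label 0 (background)
    return centroids[target]                             # pixel coordinates
```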
Figure 26. Visible image with stars (left) and thermal image (right).
Figure 27. Inertial pointing visible image with stars (left) and thermal image (right).

4.3. Earth Background

The method is also applied to generate the thermal image of the Earth as well as that of the target, as shown in Figure 28. This demonstrates again the flexibility of the tool and its applicability to any scenario. Furthermore, thanks to this application, the detection algorithms can be tested to check whether they are able to find the target when the Earth is in the background (Figure 29). The research work on this topic is still ongoing and will be presented in a dedicated research outcome; for more information, the interested reader can refer to [17].

5. Conclusions

The presented tool implements a methodology to create synthetic thermal infrared images, enabling the simulation of the infrared camera output given the camera position and attitude. It is demonstrated how the tool creates realistic thermal images in both spacecraft and asteroid proximity scenarios, in both single- and multi-material cases, proving to be suited to multiple mission cases. Indeed, the tool has already been used in a real phase B space mission design, proving to be an essential tool for testing navigation algorithm pipelines. Furthermore, two camera models are presented, which can be used as starting points for the production of realistic instrument output. The presented work is a step forward in the existing literature since, to the authors' knowledge, no methodology to produce synthetic infrared images from thermal simulations has been presented before. The approach can be repeated and implemented by any other research team, since all the software used is open source and license-free, making the method available to anyone interested in the research.

Author Contributions

Conceptualization, M.Q.; Methodology, M.Q.; Software, M.Q.; Writing—original draft, M.Q.; Visualization, M.Q.; Supervision, M.R.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank Gaia Letizia Civardi and Michele Bechini from the ASTRA research group of the Department of Aerospace Science and Technology of Politecnico di Milano for the use in this article of the fused images in the visible spectrum of TANGO, and Michele Bechini again for the use of the VESPA images in the visible spectrum. The authors thank Lucia Bianchi, also from the ASTRA group of the Department of Aerospace Science and Technology of Politecnico di Milano, for the use of the infrared images of the Earth and of TANGO with the Earth in the background. The authors thank Manfredo Gherardo Guilizzoni, Luigi Vitali, and Roberta Caruana from the Department of Energy of Politecnico di Milano for checking and reviewing the infrared heat flux modeling.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ADU: Analog Digital Unit
CFD: Computational Fluid Dynamics
DN: Digital Number
LWIR: Long Wavelength Infrared
TIR: Thermal Infrared
VIS: Visible

References

  1. Okada, T. Thermography of Asteroid and Future Applications in Space Missions. Appl. Sci. 2020, 10, 2158. [Google Scholar] [CrossRef]
  2. Civardi, G.L.; Bechini, M.; Quirino, M.; Colombo, A.; Piccinin, M.; Lavagna, M. Generation of fused visible and thermal-infrared images for uncooperative spacecraft proximity navigation. Adv. Space Res. 2023, 73, 5501–5520. [Google Scholar] [CrossRef]
  3. Okada, T.; Fukuhara, T.; Tanaka, S.; Taguchi, M.; Arai, T.; Senshu, H.; Demura, H.; Ogawa, Y.; Kouyama, T.; Sakatani, N.; et al. Earth and moon observations by thermal infrared imager on Hayabusa2 and the application to detectability of asteroid 162173 Ryugu. Planet. Space Sci. 2018, 158, 46–52. [Google Scholar] [CrossRef]
  4. Yilmaz, O.B.; Aouf, N.; Majewski, L.; Sánchez-Gestido, M.; Ortega, G. Using infrared based relative navigation for active debris removal. In Proceedings of the 10th International ESA Conference on Guidance, Navigation and Control Systems, Salzburg, Austria, 29 May–2 June 2017. [Google Scholar]
  5. Jiang, J.; Chen, X.; Dai, W.; Gao, Z.; Zhang, Y. Thermal-Inertial SLAM for the Environments with Challenging Illumination. IEEE Robot. Autom. Lett. 2022, 7, 8767–8774. [Google Scholar] [CrossRef]
  6. Tao, J.; Cao, Y.; Ding, M.; Zhang, Z. Visible and Infrared Image Fusion-Based Image Quality Enhancement with Applications to Space Debris On-Orbit Surveillance. Int. J. Aerosp. Eng. 2022, 2022, 6300437. [Google Scholar] [CrossRef]
  7. Piccinin, M. Spacecraft Relative Navigation with Electro-Optical Sensors around Uncooperative Targets. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 2022. Available online: https://hdl.handle.net/10589/196597 (accessed on 21 March 2024).
  8. Quirino, M. Novel Thermal Images Generator for Autonomous Space Proximity operations. Ph.D. Thesis, Politecnico di Milano, Milan, Italy, 2023. Available online: https://hdl.handle.net/10589/213812 (accessed on 21 March 2024).
  9. Quirino, M.; Marocco, L.; Guilizzoni, M.; Lavagna, M. High Energy Rapid Modular Ensemble of Satellites Payload Thermal Analysis Using OpenFOAM. J. Thermophys. Heat Transf. 2021, 35, 715–725. [Google Scholar] [CrossRef]
  10. Bechini, M.; Civardi, G.L.; Quirino, M.; Colombo, A.; Lavagna, M. Robust Monocular Pose Initialization via Visual and Thermal Image Fusion. In Proceedings of the 73rd International Astronautical Congress (IAC), Paris, France, 18–22 September 2022. [Google Scholar]
  11. Colombo, A.; Civardi, G.L.; Bechini, M.; Quirino, M.; Lavagna, M. VIS-TIR cameras data fusion to enhance relative navigation during In Orbit Servicing operations. In Proceedings of the 73rd International Astronautical Congress (IAC), Paris, France, 18–22 September 2022. [Google Scholar]
  12. The Ultimate Infrared Handbook for R&D Professionals. 2023. Available online: https://www.flir.com/discover/rd-science/the-ultimate-infrared-handbook-for-rnd-professionals/ (accessed on 28 May 2023).
  13. Okada, T.; Fukuhara, T.; Tanaka, S.; Taguchi, M.; Imamura, T.; Arai, T.; Senshu, H.; Ogawa, Y.; Demura, H.; Kitazato, K.; et al. Thermal Infrared Imaging Experiments of C-Type Asteroid 162173 Ryugu on Hayabusa2. Space Sci. Rev. 2017, 208, 255–286. [Google Scholar] [CrossRef]
  14. Arai, T.; Nakamura, T.; Tanaka, S.; Demura, H.; Ogawa, Y.; Sakatani, N.; Horikawa, Y.; Senshu, H.; Fukuhara, T.; Okada, T. Thermal Imaging Performance of TIR Onboard the Hayabusa2 Spacecraft. Space Sci. Rev. 2017, 208, 239–254. [Google Scholar] [CrossRef]
  15. Brageot, E.; Groussin, O.; Lamy, P.; Reynaud, J.L. Experimental study of an uncooled microbolometer array for thermal mapping and spectroscopy of asteroids. Exp. Astron. 2014, 38, 381–400. [Google Scholar] [CrossRef]
  16. Lienhard, J.H., IV; Lienhard, J.H., V. A Heat Transfer Textbook, 5th ed.; Phlogiston Press: Cambridge, MA, USA, 2020; Available online: https://ahtt.mit.edu/ (accessed on 28 May 2023).
  17. Bianchi, L. Synthetic Thermal Image Generation towards Enhanced Close-Proximity Navigation in Space. Master’s Thesis, Politecnico di Milano, Milan, Italy, 2023. Available online: https://hdl.handle.net/10589/214838 (accessed on 21 March 2024).
  18. Bechini, M.; Lavagna, M.; Lunghi, P. Dataset generation and validation for spacecraft pose estimation via monocular images processing. Acta Astronaut. 2023, 204, 358–369. [Google Scholar] [CrossRef]
  19. Watanabe, S.; Hirabayashi, M.; Hirata, N.; Hirata, N.; Noguchi, R.; Shimaki, Y.; Ikeda, H.; Tatsumi, E.; Yoshikawa, M.; Kikuchi, S.; et al. Hayabusa2 arrives at the carbonaceous asteroid 162173 Ryugu—A spinning top–shaped rubble pile. Science 2019, 364, 268–272. [Google Scholar] [CrossRef] [PubMed]
  20. MacLennan, E.M.; Emery, J.P. Thermophysical Investigation of Asteroid Surfaces. II. Factors Influencing Grain Size. Planet. Sci. J. 2022, 3, 47. [Google Scholar] [CrossRef]
Figure 1. Infinitesimal part of the mesh face area projected on the direction of the center of the infinitesimal part of the pixel area and vice versa.
Figure 2. Pixel area projection onto the sphere.
Figure 3. Angles and nomenclature for the infinitesimal areas of a mesh face and of a pixel.
Figure 4. Method workflow.
Figure 6. Detection camera model workflow.
Figure 7. Complete workflow for thermal image generation.
Figure 8. Tango geometry used for thermal simulation.
Figure 13. Ryugu 800K model corrupted geometry region.
Figure 14. Original Ryugu 800K model (left) and with smoothing filter applied (right).
Figure 15. Ryugu 800K model (left) and 3M model (right).
Figure 16. Ryugu 3M model corrupted geometry regions.
Figure 18. Original Ryugu 3M model (left) and cleaned version (right).
Figure 19. Ryugu temperature field rendering. In the right image, the object is rotated by 180° to show the other side of the asteroid. Light and shadows are inserted to better visualize the terrain; they do not represent any temperature feature.
Figure 23. VIS synthetic images (top) and respective TIR synthetic images (bottom). (a) VIS Frame 1. (b) VIS Frame 2. (c) VIS Frame 3. (d) TIR Frame 1. (e) TIR Frame 2. (f) TIR Frame 3.
Figure 24. From left to right: visible spectrum, thermal infrared spectrum, fused image with IFEVIP method, and fused image with ADF method [2].
Figure 28. Earth thermal image (left) and TANGO spacecraft with Earth in background plus noise model (right) [17].
Figure 29. Region of interest extraction through weak gradient elimination and adaptive thresholding. Credits to L. Bianchi for the image.
Table 1. Emissivity values.

Part Name        Emissivity
Antennas         0.86
Patch antennas   0.90
Coil             0.86
Support coil     0.86
Panels           0.40
Solar cell       0.60
Table 2. Ryugu simulation stabilizes at 6M faces on the asteroid surface.

N Faces   Face Size [m]   Min [K]   Max [K]   Note
3.7M      0.40            −340      1280      Oscillation
4.3M      0.36            77        800       Oscillation
5.7M      0.33            40        418       Stable
6.4M      0.31            40        416       Stable
8.9M      0.26            42        419       Stable