1. Introduction
The Earth Radiation Budget (ERB), which describes how the Earth gains energy from the Sun and how it loses energy to space at the top-of-atmosphere (TOA) [1], is of crucial importance for strengthening our understanding of climate forcing and of current and future climate change. In this ERB, the primary source of energy is solar radiation. The Earth's albedo is about 30%, which means that approximately 30% of the incoming solar radiation is reflected by the Earth. The remaining energy is absorbed by the Earth and re-emitted to space in the form of thermal radiation. Currently, due to the increasing amount of greenhouse gases in the atmosphere caused by anthropogenic activity, a small but non-zero net energy is absorbed by the Earth, yielding an unbalanced ERB, which in turn causes global warming. This is quantified by the so-called Earth Energy Imbalance (EEI), which is one of the most crucial parameters to be monitored in our pursuit to understand climate change [2,3,4].
The earliest measurements of the ERB were made with a wide field-of-view (WFOV) radiometer, observing the Earth from limb to limb [5]. This measurement principle was adapted during the Earth Radiation Budget Experiment (ERBE) [6], where the non-scanning WFOV radiometer was replaced by a scanning ERB radiometer. In the Clouds and the Earth's Radiant Energy System (CERES) program [7], only scanning radiometers were used, since these yield a higher spatial resolution than their non-scanning counterparts and, in practice, provide better accuracy than the ERBE-type WFOV radiometer [8,9].
A new concept to measure the radiative fluxes at the TOA was proposed in [10], using a combination of a WFOV radiometer with WFOV camera systems. Instead of using a single instrument combining high accuracy and high spatial resolution, separate instruments are used in order to pursue both objectives separately. The first instrument is a WFOV radiometer, which aims to accurately measure the Earth's total outgoing energy. Its estimated accuracy equals 0.44 W/m², which is a 10-fold improvement over the NASA CERES instruments. The main innovation compared to the ERBE-type WFOV radiometer is the introduction of a shutter, which is standard practice for Total Solar Irradiance (TSI) radiometers [11].
The radiometer is supplemented with high-resolution shortwave (SW, [400–1100] nm) and longwave (LW, [8–14] µm) WFOV cameras. These cameras not only improve the radiometer accuracy, but also enable scene identification, while increasing the spatial resolution and enabling the spectral separation between Reflected Solar Radiation (RSR) and Outgoing Longwave Radiation (OLR).
This paper focuses on the SW camera, which aims to characterize the RSR with a spatial resolution of at most 5 km, resulting in a clear-sky scene fraction of at least 15% [12]. We propose a conceptual design of this instrument (Section 2) and we simulate its performance in characterizing the RSR using radiative transfer simulations (Section 3). A discussion of the results is given in Section 4. Section 5 closes this paper with a summary and anticipates the future development of the SW camera prototype, as well as the inclusion of an LW camera to achieve the overall scientific objective.
2. Optical System Design
This section focuses on the optical design of the SW camera. Section 2.1 details the technical requirements and constraints that must be considered for the optical design, whilst Section 2.2 presents the optical design and evaluates the spot sizes, contrast, and aberrations.
2.1. Technical Requirements and Constraints
Four main requirements guiding the optical system design can be identified. (1) The Earth should be seen from limb to limb, from a nominal altitude of 700 km. (2) The camera should enable scene identification, (3) while measuring SW radiation, allowing the RSR to be reconstructed on a stand-alone basis with an accuracy of 5% or better [10]. (4) The camera should have a nadir resolution of at most 5 km, in order to allow discrimination between cloudy and clear-sky scenes. An additional constraint is that the volume of the camera (optics and detector) should fit within one CubeSat Unit (1U). Moreover, we target a minimal number of optical elements, while using only a limited number of aspherical surfaces in order to reduce the cost and ease the fabrication.
To observe the Earth from limb to limb, the FOV should be at least 2 × 63.5°. Taking a margin for altitude and pointing errors into account, we target a FOV of 2 × 70°, corresponding to an image height of 2.46 mm and a focal length of 3.3 mm. Because it is easier to reduce the FOV than to enlarge it, the value of 2 × 70° is considered as an upper limit in our requirements.
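As a quick plausibility check (a sketch, not from the paper; it assumes a spherical Earth with a mean radius of 6371 km), the half-angle from nadir to the Earth's limb seen from altitude h is arcsin(R/(R + h)):

```python
import math

def limb_half_angle_deg(altitude_km, earth_radius_km=6371.0):
    """Half-angle (degrees) from nadir to the Earth's limb,
    for a spherical Earth observed from the given altitude."""
    return math.degrees(math.asin(earth_radius_km / (earth_radius_km + altitude_km)))

half_angle = limb_half_angle_deg(700.0)
print(f"limb half-angle from 700 km: {half_angle:.1f} deg")
```

This evaluates to roughly 64°, in line with the 2 × 63.5° limb-to-limb requirement (the small difference depends on the exact Earth radius and altitude assumed); the 2 × 70° design value then provides the margin for altitude and pointing errors.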
We favor the use of a low-cost commercial off-the-shelf (COTS) detector. In practice, we choose a CMOS sensor, which has a typical sensitivity between 400 and 1100 nm without color filter [13]. In order to reconstruct the broadband sensor response (detailed in Section 3.1), this sensor is equipped with Red (R), Green (G), and Blue (B) color filters, arranged in an RGGB Bayer pattern. For our purpose, we selected the Aptina MT9T031 detector, comprising 2048 × 1536 pixels of 3.2 µm, on a rectangular area of 6.5 mm × 4.92 mm, and for which the space-flight-proven [14] Gomspace NanoCam is commercially available. To image the Earth from limb to limb, we use a circular detector area comprising 1536 × 1536 pixels in a circle with a diameter of 4.92 mm.
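These numbers can be cross-checked with a few lines of arithmetic (a sketch; the 1536-pixel circular area, 3.2 µm pitch, 2 × 70° FOV, and 700 km altitude are taken from the text, and an ideal equiangular mapping is assumed):

```python
import math

PIXEL_PITCH_MM = 3.2e-3   # 3.2 um pixel pitch
N_PIXELS = 1536           # pixels across the circular image area
FOV_DEG = 140.0           # full field of view (2 x 70 deg)
ALTITUDE_KM = 700.0       # nominal orbital altitude

# Diameter of the circular image area on the detector: 1536 x 3.2 um = 4.9152 mm.
image_diameter_mm = N_PIXELS * PIXEL_PITCH_MM

# For an equiangular (f-theta) lens each pixel covers the same angle, so the
# nadir ground sample distance is roughly altitude x angular pixel size.
ifov_rad = math.radians(FOV_DEG) / N_PIXELS
nadir_gsd_km = ALTITUDE_KM * ifov_rad

print(f"{image_diameter_mm:.2f} mm, {nadir_gsd_km:.2f} km")
```

The diameter reproduces the quoted 4.92 mm, and the nadir ground sample distance comes out near 1.1 km, of the same order as the 1.2 km nadir pixel size quoted below.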
For an equiangular WFOV lens, the size of a nadir pixel is 1.2 km. We target an Airy disk larger than 1 × 1 pixel in order to satisfy the Nyquist criterion, and smaller than 2 × 2 pixels to avoid losing too much spatial resolution. The effective nadir spatial resolution will then be 2.2 km for λ = 900 nm. Because of the wavelength dependence of the Airy disk, the F/# should be between 4.0 (when choosing λ = 650 nm, the central wavelength) and 2.9 (when choosing λ = 900 nm, in the near-infrared). Finally, the goal is to minimise the Seidel aberrations, except for the distortion, which can be reduced by post-processing.
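The F/# bounds follow directly from the Airy disk diameter d = 2.44 λ F/#; a short sketch (not from the paper) setting d equal to the 2 × 2 pixel upper limit of 6.4 µm:

```python
def f_number_for_airy(airy_diameter_um, wavelength_um):
    """F/# at which the Airy disk diameter d = 2.44 * lambda * F/#
    equals the given target diameter."""
    return airy_diameter_um / (2.44 * wavelength_um)

two_pixels_um = 2 * 3.2  # Airy disk spanning 2 x 2 pixels of 3.2 um

f_650 = f_number_for_airy(two_pixels_um, 0.650)  # at the central wavelength
f_900 = f_number_for_airy(two_pixels_um, 0.900)  # in the near-infrared
print(f"F/# = {f_650:.1f} at 650 nm, F/# = {f_900:.1f} at 900 nm")
```

This recovers the F/# of 4.0 at 650 nm and 2.9 at 900 nm quoted above.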
2.2. Optical Design of the Camera
The optical design of the SW camera is refractive and composed of 5 lenses, of which the last two form an achromatic doublet (Figure 1). This doublet compensates for the chromatic aberrations, which might otherwise be present due to the large bandwidth of the system. The first two lenses are made of LAK14, whose high refractive index allows us to bend the incoming rays efficiently (Table 1). The number of lenses and of aspherical surfaces is intentionally limited to reduce the cost of the optical design, and the Zemax OpticStudio® High-Yield manufacturing feature [15] has been used to design for the as-built performance of the optical system. Only the front surface of the first lens and the back surface of the last lens are aspherical (Table 1). Because these are the most critical surfaces, they have the most influence on the performance of the optical design. Consequently, they have been made aspheric to ensure that all light is collected and properly imaged on the image sensor. All lens materials were selected from the SCHOTT® catalog [16].
During the optimization process, the effective focal length has been kept constant at 3.3 mm, such that the image height matches the full circular object (the Earth) projected onto the detector. Lens thicknesses have been constrained between 2 mm and 5 mm for manufacturing purposes. The total axial length and the maximum diameter of the optical system have been constrained to 9 mm, so that the full camera fits in 1U. The initial merit function considered the minimization of the RMS spot size. This was adapted to the contrast (or Modulation Transfer Function, MTF) once the spots were reasonably good, i.e., when the spot sizes approximately matched those of the Airy disks. When optimizing the contrast, the spatial frequency was progressively increased up to 80 cycles/mm, where we expected a good MTF. Near the end of this optimization process, particular attention was paid to the improvement of the manufacturing yield, rather than considering only the best nominal performance. Additionally, finding the best aspheric surfaces was made possible using the Zemax OpticStudio® Find Best Asphere tool [15].
In Figure 1, the different colors correspond to the different fields between 0° and 70°. The full FOV is circular and equals 140°. To evaluate the performance of the camera system, we consider the spot diagrams shown in Figure 2. The spot size is simulated for different fields between 0° and 70°, corresponding to the different colors in Figure 1. In Figure 2, the black circles correspond to the Airy disks. When optimizing the optical design, the aim is to match the spot size to the Airy disk, in order to obtain a near-diffraction-limited optical design.
Table 2 indicates that the spot sizes approximate the Airy disks for all fields, except for fields 5 and 6, where the spot sizes slightly exceed the Airy disk size, due to a larger amount of aberrations.
Regarding the Seidel aberrations, the major contribution comes from distortion (Figure 3), which is common in wide-field imaging systems, since this aberration generally increases with the field. Additionally, it appears that the distortion is mainly induced by the first lens surface, which is aspherical: while this surface enables us to avoid other aberrations, it is the main contributor to the distortion. This is acceptable because, as stated in Section 2.1, there is no particular requirement on the distortion, since it can be measured during pre-flight characterization and taken into account during the in-flight processing. Figure 4 gives a better understanding of the amount of distortion present in the optical system. It presents the total distortion (in %), showing that the distortion is maximal at 70°, where it equals 74.6%.
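Since the distortion is left to post-processing, correcting it amounts to inverting the measured radial mapping between field angle and image height. A minimal sketch of this idea (the coefficients below are made up for illustration; the real mapping would come from the pre-flight characterization):

```python
import numpy as np

# Hypothetical calibrated mapping: image height r (mm) versus field angle
# theta (rad), as would be tabulated during pre-flight characterization.
f_mm = 3.3
theta_cal = np.linspace(0.0, np.radians(70.0), 200)
r_cal = f_mm * theta_cal * (1.0 - 0.05 * theta_cal**2)  # made-up barrel-like term

def pixel_radius_to_field_angle(r_mm):
    """Invert the monotonic r(theta) calibration by interpolation, so each
    pixel's radial position can be assigned its true viewing angle."""
    return np.interp(r_mm, r_cal, theta_cal)

# Round-trip check: a 50 deg field angle mapped to the detector and back.
theta = np.radians(50.0)
r = f_mm * theta * (1.0 - 0.05 * theta**2)
recovered_deg = np.degrees(pixel_radius_to_field_angle(r))
print(f"recovered field angle: {recovered_deg:.2f} deg")
```

The same table-inversion step would be applied per pixel during the in-flight processing mentioned above.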
While spot sizes are a good starting point to assess the image quality, we further optimized the Modulation Transfer Function (MTF) quantifying the spatial contrast, resulting in a polychromatic (between 400 nm and 900 nm) diffraction MTF ≥ 0.4 at 78 cycles/mm (Figure 5). This indicates a good performance of the optical design, since, for the pixel size of 3.2 µm, the Nyquist frequency equals 156 cycles/mm.
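The 156 cycles/mm figure is simply the Nyquist frequency of the 3.2 µm pixel grid; a quick sketch of the arithmetic (not from the paper):

```python
PIXEL_PITCH_MM = 3.2e-3  # 3.2 um detector pixel pitch

# Nyquist frequency of the sampling grid: one full cycle needs two pixels.
nyquist_cpmm = 1.0 / (2.0 * PIXEL_PITCH_MM)
print(f"Nyquist frequency: {nyquist_cpmm:.1f} cycles/mm")

# The MTF was evaluated at half that frequency, i.e. 78 cycles/mm.
assert abs(nyquist_cpmm / 2.0 - 78.0) < 0.2
```

The 78 cycles/mm at which the MTF is reported thus corresponds to half the Nyquist frequency.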
4. Discussion
The state-of-the-art ERB measurements are provided by NASA's CERES program [7,21]. The spatial resolution for CERES on the Terra and Aqua satellites equals 20 km at nadir. The CERES instrument on board TRMM achieved a resolution of 10 km. As TRMM was a precessing satellite, it sampled all viewing angle configurations, which makes it well suited for the development of ADMs [19].
Our optical design of the SW camera shows sufficiently good image quality to enable scene identification, while its large bandwidth allows a broadband estimation. The targeted spatial resolution of 2.2 km at nadir is achieved, since all RMS spot radii are smaller than 3.2 µm over the full FOV. A broader view is obtained when considering the polychromatic diffraction MTF, which is better suited to assess the image quality. The performed radiative transfer simulations indicate that, when using the SW camera on a stand-alone basis, the broadband albedo can be estimated with a random error lower than 3% across all simulated scene types and all solar zenith angles, which is better than the design target of 5% that was put forward in [10]. The next step will be the realization of a prototype of this SW camera. During this step, the detector spectral response should be experimentally determined, and the estimated accuracy of the broadband albedo should be updated accordingly.
While conducting a tolerance analysis that accounts for manufacturing imperfections is of major importance before starting the fabrication and assembly of a prototype, we have confidence in our conceptual design for the following reasons. First, the optical design has only two aspherical surfaces and uses conventional materials. Second, the optical design was made with the most stringent constraints (largest allowed FOV, smallest allowed F/#, RMS spot radii of the size of one pixel for the largest wavelength). These constraints can still be relaxed: the FOV could be relaxed to, e.g., 135°, while the F/# could be relaxed to 4.0 using 650 nm as central wavelength. Depending on the altitude, the FOV can also be relaxed, as long as the camera observes the Earth from limb to limb. In order to allow separating clear-sky and cloudy pixels, the targeted nadir spatial resolution is of the order of 5 km; with a nadir spatial resolution of 2.2 km, our optical design beats this requirement by far.
In addition, it will also be important to perform a full stray-light analysis. Scattering may occur because of the imperfect transmission and surface roughness of the optical elements. However, a preliminary analysis has shown that, considering their respective thicknesses, all the considered optical elements have a transmission of more than 90% over the full spectral width, and that this transmission increases to more than 99% starting from 470 nm. The back-reflection will thus be very weak, while an anti-reflection coating can reduce the stray light even further. Additionally, a path analysis was performed using the non-sequential mode of Zemax OpticStudio®, enabling the study of the light transmission through the camera system while taking into account material absorption and light reflection at the lens interfaces. This analysis indicated that almost 97% of the light reaches the detector.
Recently, the Libera mission has been selected by NASA in the framework of the Earth Venture Continuity program [9,22]. The mission carries a monochromatic WFOV camera, of which the full specifications are currently still unknown. However, in comparison to this camera, which is mainly used for scene identification, our camera also enables a broadband estimation of the shortwave radiation, or reflected solar radiation, owing to its large bandwidth.
We also foresee the development of an LW WFOV camera, targeting the estimation of the thermal radiation [10]. The ensemble of the three WFOV instruments (the radiometer, the SW camera, and the LW camera) will form a compact and relatively low-cost payload, suitable for integration on nano- or micro-satellites. Such small satellites can supplement CERES and its follow-on mission Libera, e.g., by improving the sampling of the diurnal cycle. This is particularly relevant since currently no follow-on mission for the sampling of the ERB from the morning orbit is foreseen after the end of life of the CERES instrument on the Terra satellite, which is expected around 2026.
5. Conclusions
We proposed to monitor the Earth Radiation Budget with a suite of compact space-based instruments, adequate for integration within a nano- or micro-satellite. These instruments are a wide field-of-view radiometer, a shortwave camera, and a longwave camera. The core instrument, which was the object of a previous study [10], is a wide field-of-view radiometer aiming to measure the incident solar energy, as well as the Earth's total outgoing energy, with an accuracy of 0.44 W/m². To supplement this low-resolution radiometer, we propose to use wide field-of-view high-resolution shortwave and longwave cameras. These cameras will allow:
separating the shortwave and the longwave radiation;
increasing the spatial resolution; and,
performing a scene identification, in particular by discriminating cloudy from clear-sky scenes.
In this paper, we have described our optical design for the wide field-of-view shortwave camera. It consists of three singlet lenses and an achromatic doublet. The full field-of-view equals 140°, enabling observation of the Earth from limb to limb. Using 1536 × 1536 pixels of 3.2 µm of the Aptina MT9T031 detector, the nominal spatial resolution equals 2.2 km at nadir, beating the 5 km requirement. Barrel distortion appears to be the main aberration; it is maximal at 70°, where it equals 74.6%. At 900 nm, the optical design is nearly diffraction-limited. A further assessment of the image quality is given by the polychromatic diffraction Modulation Transfer Function, which exceeds 0.4 at 78 cycles per mm. Therefore, we are confident that the wide field-of-view shortwave camera achieves adequate optical performance. The ability to estimate the RSR from the camera spectral measurements was assessed using radiative transfer simulations. Because the precise spectral response of our detector is currently unknown, a synthetic spectral response has been simulated using a classic CMOS RGB response. The incoming and outgoing spectral fluxes have been simulated for different reference scenes using the libRadtran software [17]. Following these radiative transfer simulations, the estimated stand-alone accuracy of the broadband albedo estimate from the SW camera is 3%, which is better than the 5% requirement.
The next steps will include the conceptual design of the WFOV LW camera and the realization of laboratory prototypes of all three instruments.