Review

Review for Examining the Oxidation Process of the Moon Using Generative Adversarial Networks: Focusing on Landscape of Moon

1 Department of Computer Engineering, Sunchon National University, Suncheon 57992, Korea
2 Department of Data Informatics, (National) Korea Maritime and Ocean University, Busan 49112, Korea
3 Department of Data Science, (National) Korea Maritime and Ocean University, Busan 49112, Korea
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(9), 1303; https://doi.org/10.3390/electronics11091303
Submission received: 11 March 2022 / Revised: 15 April 2022 / Accepted: 15 April 2022 / Published: 20 April 2022
(This article belongs to the Section Artificial Intelligence)

Abstract

The Japan Aerospace Exploration Agency (JAXA) collected and studied data observed by the lunar probe SELenological and ENgineering Explorer (SELENE) from 2007 to 2017. JAXA discovered that oxygen from the Earth's upper atmosphere is transported to the Moon by the tail of the Earth's magnetic field. However, this research is still in progress, and more data are needed to clarify the oxidation process. Therefore, this paper proposes supplementing the insufficient observation data with Generative Adversarial Networks (GAN) and reviews the methodology and research trends for examining the oxidation process and landscape of the Moon, with the aim of raising the completeness of the preceding research. As a result of the review, we propose Anokhin’s Conditionally-Independent Pixel Synthesis (CIPS) as the model for future experiments. CIPS generates the color value of each pixel independently, and since it uses a Multi-Layer Perceptron (MLP) network rather than spatial convolutions, it offers a significant advantage in scalability. We conclude that the proposed methodology will save time and cost compared with the existing research in progress and will help reveal the causal relationship more clearly.

1. Introduction

In recent years, research has been conducted by applying Artificial Intelligence (AI) to space information. In particular, studies of landscape and topography using datasets obtained from imaging equipment are an active topic. Accordingly, this research team has prepared a review paper on generating datasets with Generative Adversarial Networks (GAN) to examine the oxidation process of the Moon, timed to when the Korea Aerospace Research Institute (KARI) is preparing the launch of the Korea Pathfinder Lunar Orbiter (KPLO) in August 2022.
The iron on the surface of Mars turned into iron oxide because of water and oxygen several billion years ago. Mars looks orange because it has rusted through that combination of water and oxygen, while the Moon looks gray due to the absence of water and oxygen on its surface. Recently, however, a research finding was announced stating that the surface of the Moon is rusting [1].
Lunar exploration is primarily concerned with methods of digging up resources buried underground or lying on the surface of the Moon and utilizing them as an energy source or as material for processing. Research on utilizing the Moon and space is ongoing, and securing baseline data for policy making through accurate and rapid analysis of soil, climate change, and marine erosion, informed by the oxidation process of the Moon, is warranted. Thus, this research is a review on generating a dataset with GAN to examine the oxidation process of the Moon, focusing on the lunar landscape.
Meanwhile, in 2008, the Indian Space Research Organisation (ISRO) launched a lunar probe, Chandrayaan-1. The exploration goal of Chandrayaan-1 was to map the ice and minerals that might exist on the lunar surface and underground.
Chandrayaan-1 was equipped with the Moon Mineralogy Mapper (M3), produced by the U.S. National Aeronautics and Space Administration’s (NASA) Jet Propulsion Laboratory (JPL). Researchers produced a mineral distribution chart by measuring the lunar surface with M3 for nearly a year. Shuai Li discovered hematite in the observed data, and Li and his colleagues noted that hematite should normally be reduced back by the hydrogen in the solar wind [2]. As for the shielding element, when the Moon is in the tail of the Earth’s magnetosphere, the influx of solar wind falls below 1%. Since hematite is an ore generated as iron rusts, water and oxygen are needed for iron to rust. The locations where hematite was discovered are the high-latitude regions of the Moon, close to 80 degrees. These polar regions, known for their permanently shadowed craters, are also where ice was discovered a few years ago. Therefore, oxygen-containing water may have created the hematite, while even dust particles colliding with the Moon after drifting in from outside may have carried water. Oxidation is a reaction involving oxygen, and it cannot occur without oxygen. Moreover, there is a possibility that the water originated on the lunar surface. The fact that water exists on the Moon was verified through Chandrayaan-1 in 2008. In 2009, NASA also intentionally crashed a spacecraft into the lunar surface and examined the material ejected in its aftermath. As a result, it confirmed that a considerable amount of water exists in the polar region of the Moon. Since water exists in all areas of the lunar surface, hematite is also distributed evenly [3]. More precisely, moisture exists from about 8 cm beneath the surface, and since the Moon has no atmosphere, there is no wind. Therefore, the moisture existing 8 cm below the surface has no reason to appear on the surface.
A meteor is the phenomenon of material such as dust left over from the solar system’s formation being drawn in by the Earth’s gravity [4]. Although such material moves very fast, it burns up completely due to friction with the atmosphere. If a meteoroid collides with the surface of the Moon, which has no atmosphere, it digs up the soil. With some impacts, moisture from beneath the surface comes up and, combined with oxygen, causes the iron on the surface to rust. However, this does not mean there is absolutely no atmosphere on the Moon [3]. Argon, helium, neon, sodium, potassium, and hydrogen all exist in small amounts, but oxygen is not among them.
In 2007, the Japan Aerospace Exploration Agency (JAXA) launched a lunar probe called the SELenological and ENgineering Explorer (SELENE). The scientists who collected and studied the data observed by SELENE for 10 years discovered that oxygen in the upper atmosphere of the Earth escapes into outer space carried by the tail of the magnetic field. The Earth’s magnetic field has a tail that stretches out on the side opposite the Sun. This magnetotail is very active, generates massive changes, and supplies energy to ions and electrons. Oxygen travels roughly 385,000 km through the vacuum of space along this magnetic tail. This is the reason that the lunar surface facing the Earth is rusting, and for this reason hematite is concentrated on the side of the Moon facing the Earth. However, this research is still ongoing, and more data are needed to examine the oxidation process [5,6].
Most computer vision studies on the Moon have focused on detecting features of the lunar surface. In this paper, by contrast, we surveyed lunar data augmentation to address the characteristics of the aerospace field, where the amount of data is small and data collection is difficult. However, because most existing studies focus on detection, it was difficult to find studies using GAN directly. Therefore, the existing methods applied to the Moon and the overall development of GAN were investigated, with a focus on how GAN, which is very popular in computer vision, could be applied to the Moon.
This paper is a review of GAN and machine learning applications in the aerospace industry and aims to reinforce the ideas for experiments that can be carried out in a future study. We make four contributions: (1) a basic explanation of GAN is given; (2) developments in the GAN field related to future studies are classified and explained; (3) the ways that machine learning technology is being applied in the aerospace industry are introduced; and (4) we propose future research applying GAN to the aerospace industry for data augmentation.
Meanwhile, the GAN model intended for use in training in future studies is Anokhin’s Conditionally-Independent Pixel Synthesis (CIPS). The initialization of weights optimized for the observation data, the parameter update method, and the combination of activation functions will be verified through experiments [7,8]. Ensemble techniques may also be adopted as the occasion warrants [9].
Section 2 describes aerospace and GAN-related studies, and Section 3 describes future research methods applying Anokhin’s proposed CIPS. Section 4 explains trends in AI research in the aerospace industry, and Section 5 presents the conclusions of this paper.

2. Related Studies

2.1. Conducting Aerospace Mission Using a Probe

KARI is preparing the launch of KPLO in August 2022. KPLO is set to be launched on SpaceX’s Falcon 9 from Cape Canaveral Air Force Station, with the mission of imaging the lunar surface, finding space resources, and streaming video to Earth. The space resources of interest include water, oxygen, iron, and helium-3, a clean energy source. Helium-3 is one of the light, stable isotopes of helium, with two protons and one neutron. On Earth, helium-3 exists in an amount of about one millionth that of helium-4.
The lunar probe is equipped with six payloads: KARI’s high-resolution camera (Lunar Terrain Imager, LUTI), the Korea Astronomy and Space Science Institute’s Wide-Angle Polarimetric Camera (PolCam), Kyunghee University’s KPLO Magnetometer (KMAG), the Korea Institute of Geoscience and Mineral Resources’ KPLO Gamma-Ray Spectrometer (KGRS), the Electronics and Telecommunications Research Institute’s Delay/Disruption Tolerant Network (DTN) verifier, and NASA’s ShadowCam. Five payloads were developed with Korean technology, and one with overseas technology. LUTI images the major regions of the lunar surface, and PolCam takes polarized-light images of the lunar surface at a resolution of approximately 100 m. KMAG will measure the minute magnetic field around the Moon, and KGRS will perform spectroscopic observations of gamma-ray particles from the lunar surface. The DTN verifier will validate space–internet communication technology between the Earth and lunar probes, and ShadowCam will observe the south pole of the Moon, where ice is expected to exist. Meanwhile, NASA requested a change to the orbiter’s trajectory out of concern that ShadowCam might not be able to image the lunar surface as planned. Figure 1 illustrates the lateral magnetic field image of the Sun restored with GAN [10,11].
Figure 1a depicts an extreme ultraviolet image, Figure 1b shows the lateral magnetic field image generated by GAN, and Figure 1c shows a magnetic field image observed by the satellite. Figure 1a,c were acquired at 3-day intervals. Analysis of the front-side magnetic field image and the magnetic field image synthesized by GAN verified that sunspots were adequately reproduced.
Beyond directly carrying out missions, Divish Rengasamy et al. broadly divided the models used in maintenance, repair, and overhaul research based on probe status data into deep autoencoders, long short-term memory networks, Convolutional Neural Networks (CNN), and deep belief networks. They examined the concept and structure of each model, as well as the problems of using them, identified the state of current research, and explained directions for follow-up research [12].

2.2. Case That Applies Machine Learning Technology to the Moon

Since the Moon is a satellite revolving around the Earth, it has been a representative subject of astronomical research. With the recent resurgence of machine learning technology, various techniques are now being applied to the Moon as well. This section lists studies related to oxidation and the landscape features of the Moon (lunar landscape, craters, etc.) relevant to the proposed method, along with other areas where machine learning can be applied to the Moon. DeLatte et al. listed the challenges facing planetary geologists and machine learning studies, along with recent results in automatic crater detection using machine learning technologies, and presented recommendations for better automation [13]. Lee Hoonhee listed recent deep learning studies on crater recognition on the Moon and discussed the limitations of crater recognition research [14].
Jia et al. noted that existing Crater Detection Algorithms (CDA) fail to detect craters that are small or overlapping. The authors therefore suggested that channel-wise attention using multi-path representation and self-calibrated convolutions is promising, and proposed split-attention networks with a self-calibrated convolution structure that can generate discriminative feature representations [15]. Silburt et al. conducted an experiment detecting the location and size of craters in digital elevation maps of the Moon using CNNs; their model recovered 92% of the craters in the human-generated test set [16]. Yutong Jia et al. proposed a CDA based on U-Nets and achieved a crater detection accuracy of 93.4% through repeated training performed 5000 times [17]. Ali-Dib et al. used Mask R-CNN, a general framework for instance segmentation, to extract 2D crater shapes from lunar digital elevation maps, identifying 87% of the craters [18]. Chen et al. proposed High-Resolution-Moon-Net, adopting transfer learning and state-of-the-art techniques such as High-Resolution Net, to simultaneously and automatically identify craters and rilles and thereby facilitate the discovery of lunar energy resources [19].
Wilhelm et al. used CNNs to analyze the lunar surface without labels, found significant representative regions, and demonstrated that labels for those regions can be derived in a fully unsupervised manner [20]. Roy et al. presented a U-Net-based deep learning model that restores missing pixels of a lunar surface image in a context-aware fashion known as image inpainting [21]. Lesnikowski et al. experimented with detecting the landing sites of Apollo 15 and 17 using a variational autoencoder, showing that lunar surface anomaly detection can be performed unsupervised without labeled representations. They also mentioned that unsupervised data density estimation can be extended to various tasks, including locating lunar resources [22].
Xia et al. used Deep Neural Networks (DNN), which can better capture nonlinear relationships than the traditional regression methods used in the past, to draw abundance maps of six major oxides and magnesium on the Moon [23]. Table 1 lists research cases that have applied machine learning to the Moon.

2.3. Basic Idea of GAN

In his 2014 paper, Ian Goodfellow, who proposed GAN, explained it by likening it to a police officer and a money counterfeiter. The counterfeiter creates money that resembles real money and works hard to deceive the officer, while the officer aims to arrest the counterfeiter by reliably discerning (classifying) real money from counterfeit money. This process can be described as a minimax game in which the counterfeiter (Generator) and the police officer (Discriminator) repeatedly create and classify counterfeit money. GAN is embodied by two competing neural networks, a Generator ( G ) and a Discriminator ( D ). G converts random noise into a sample that looks real, and D discerns whether an input sample is authentic or something synthetic created by G.
The G is trained to generate images similar to real images, and D is trained to distinguish between the ground-truth image and data generated by G . Equation (1) is the objective function of GAN used for training. This allows the two networks of G and D to compete and find a Nash equilibrium.
$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim P_{data}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_z(z)}\left[\log\left(1 - D(G(z))\right)\right] \quad (1)$$
where the value of this V ( D , G ) is calculated as a probability value, and this equation is described in terms of G and D as follows:
  • D : The input ground-truth image x should yield a high probability value (a large log D ( x ) ), and the input fake image G ( z ) should yield a low probability value. That is, D is updated to distinguish between the ground-truth image and the fake image generated by G .
  • G : A fake image is generated from noise z drawn from a specific distribution (e.g., a Gaussian distribution). When the generated fake image G ( z ) is passed to D , G is trained so that the resulting probability is high, as it would be for a ground-truth image. In other words, G is updated so that D ( G ( z ) ) increases and the overall objective value decreases, i.e., so that G generates images that D cannot clearly distinguish from real ones.
When training a GAN, the two networks G and D are not trained at the same time; the weights of one network are frozen while the weights of the other are updated.
In the end, the target of the ground-truth image is ‘1’, and the target of the generated image is ‘0’. The ground-truth image outputs a value close to ‘1’ and the generated image outputs a value close to ‘0’ [6].
Meanwhile, since there is no way to tell onto which point in the latent space a ground-truth image would be mapped, it is more difficult to train G . If the output of G is fed into D , the probability of it being genuine is output. This probability is the output of the GAN, and the input is a randomly generated latent vector of fixed dimension. A training batch with target ‘1’ is created for these outputs, and the GAN is trained so that the closer the output gets to ‘1’, the more genuine-looking the generated samples become. Figure 2 below illustrates the GAN model.
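As a minimal sketch of the alternating minimax training described above (assuming PyTorch; the generator, discriminator, latent dimension, and layer sizes below are illustrative assumptions, not the models used in any cited study), the loop freezes one network while updating the other, labeling ground-truth images ‘1’ and generated images ‘0’:

```python
import torch
import torch.nn as nn

# Hypothetical generator G and discriminator D on flattened 28x28 images.
latent_dim = 128
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    ones = torch.ones(batch_size, 1)    # target for ground-truth images
    zeros = torch.zeros(batch_size, 1)  # target for generated images

    # Update D with G frozen: maximize log D(x) + log(1 - D(G(z))).
    z = torch.randn(batch_size, latent_dim)
    fake = G(z).detach()                # detach so no gradient reaches G
    d_loss = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Update G with D frozen: push D(G(z)) toward 1.
    z = torch.randn(batch_size, latent_dim)
    g_loss = bce(D(G(z)), ones)         # non-saturating generator loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example: one step on random stand-in data scaled to [-1, 1].
print(train_step(torch.rand(16, 784) * 2 - 1))
```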

2.4. Variants of GANs

In this section, we explain various variants of GAN, largely divided into architecture, stability, and image-to-image translation. Architecture includes studies that apply new concepts to models or change model structures. Stability includes studies on improving training instability, which is one of the challenges with GANs. Finally, image-to-image translation includes models that perform the image-to-image translation task. Table 2 summarizes the models (methods) described in this section.

2.4.1. Architecture

Conditional GAN (CGAN) is a variant of GAN that adds conditionality by feeding auxiliary information ‘ y ’, such as a class label or data from another modality, to both the generator and the discriminator. Mehdi Mirza et al. showed the usefulness of conditioning and its potential applications through CGAN and implied that it would enable a detailed analysis of each performance and characteristic [24]. By adopting concepts from information theory, Chen et al. proposed InfoGAN, which can learn disentangled representations in a completely unsupervised way. InfoGAN achieves this by maximizing the mutual information between a fixed small subset of the GAN’s noise variables and the observations [25].
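A minimal sketch of the CGAN conditioning idea, assuming PyTorch: the auxiliary information y (here a one-hot class label) is simply concatenated with the generator’s noise input and with the discriminator’s image input. Layer sizes and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, n_classes, img_dim = 100, 10, 784  # illustrative sizes

# Generator conditions on y by concatenating it with the noise vector z.
class CondGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh())

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

# Discriminator conditions on y by concatenating it with the image.
class CondDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid())

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))

# Example: generate and score one sample conditioned on class 3.
z = torch.randn(1, latent_dim)
y = nn.functional.one_hot(torch.tensor([3]), n_classes).float()
print(CondDiscriminator()(CondGenerator()(z, y), y).shape)  # torch.Size([1, 1])
```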
Meanwhile, Junbo Zhao et al. proposed Energy-Based GAN (EBGAN), which introduces the concept of energy to GAN. In EBGAN, low energy is assigned to regions on or near the data manifold where data exist, and high energy is assigned to the rest of the space [26]. In other words, if Y is the correct answer for X in a pair ( X , Y ), a low energy value is allocated to ( X , Y ), and otherwise a high energy value is allocated. Viewed from the perspective of unsupervised learning, this amounts to modeling so that low energy is allocated to X on the data manifold.
Radford et al. studied Deep Convolutional GAN (DCGAN), which fuses CNNs, previously used mainly for supervised learning, with GAN and shows advantages even in unsupervised learning [27]. By applying CNNs, DCGAN laid the groundwork for applying GAN to computer vision and made it possible to train GAN stably. As another approach, Han Zhang and colleagues proposed Self-Attention GAN (SAGAN), which introduces attention into GAN and is capable of attention-driven long-range dependency modeling for image generation. Unlike earlier GANs, which generate detail only from local points, SAGAN can generate detail using cues from all locations, allowing the discriminator to verify whether details in distant parts of an image are consistent with each other [28].
Tero Karras and colleagues presented Progressive Growing of GANs (PGGAN), a new training methodology in which the generator and discriminator grow gradually compared with existing GANs. Training initially begins with a Low-Resolution (LR) image, and the resolution is then raised gradually by adding layers as learning progresses. The authors showed that training is carried out more stably and training time is reduced by adopting this gradual learning methodology [29].
Gurumurthy et al. proposed DeLiGAN (GAN for Diverse and Limited Data), which models the latent space as a Gaussian mixture with a reparameterization so that it can be trained on limited training data. They also noted that conventional GANs need large amounts of diverse training data, even though GANs have been tremendously successful in image generation [30].
Andrew Brock and colleagues proposed BigGAN, a very large GAN, to generate high-resolution images from complicated datasets such as ImageNet. They proposed simple structural changes that improve scalability and conditioning by modifying regularization, scaling GAN far beyond prior work. They also introduced a “truncation trick” that controls the trade-off between sample variety and fidelity [31].
Tero Karras et al. proposed Style-based GAN (StyleGAN), a GAN architecture that adopts the style transfer concept on top of the PGGAN structure. This model enables automatic, unsupervised separation of high-level attributes and stochastic variation in the generated images, allowing intuitive, scale-specific control of synthesis [32]. A year later, the authors analyzed the characteristic artifacts of StyleGAN, the state-of-the-art data-driven unconditional generative image model up to that point, and presented methods for solving them. In particular, they proposed redesigning the generator normalization and regularization to prevent the droplet-like artifacts caused by instance normalization in StyleGAN [33].
Meanwhile, Ivan Anokhin and colleagues presented a new image generator based on the structure of StyleGANv2, in which the color value of each pixel is computed independently from a random latent vector and the coordinates of that pixel, without heavy reliance on convolutions. This is possible because CIPS, the model proposed by Anokhin and selected by us for the moon data augmentation task in future research, generates data per pixel and uses a Multi-Layer Perceptron (MLP). It is therefore expected to scale to a variety of data conditions [34].

2.4.2. Stability

Mescheder et al. identified elements that negatively affect the convergence of GAN training while analyzing common training algorithms. To solve this problem, they examined why gradient ascent fails to find the local Nash equilibrium and used this insight to design an algorithm for finding it. They empirically showed that GAN can be trained stably using the proposed algorithm [35].
Bousmalis et al. proposed the GAN-based unsupervised Pixel-Level Domain Adaptation (PixelDA) method, an unsupervised domain adaptation algorithm that learns how to map representations between domains and extract domain-invariant features. Compared with other existing methods, PixelDA offers advantages in decoupling from the task-specific architecture, training stability, data augmentation, and interpretability [36].
Martin Arjovsky and colleagues defined the Earth Mover (EM) distance to improve the instability of the GAN training process and proposed Wasserstein GAN (WGAN), a model that applies the EM distance to GAN [37]. However, even WGAN can still generate bad samples or fail to converge, owing to the weight clipping adopted to satisfy the Lipschitz condition. To solve this problem, Ishaan Gulrajani and colleagues presented a method of penalizing the norm of the critic’s gradient with respect to its input as an alternative to weight clipping [38].
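A minimal sketch of the gradient penalty idea from [38], assuming PyTorch and working on flattened samples (the toy critic and sample shapes are assumptions for illustration): the penalty pushes the norm of the critic’s gradient, evaluated at points interpolated between real and fake samples, toward 1.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP style penalty: (||grad critic(x_hat)||_2 - 1)^2 at interpolated points."""
    eps = torch.rand(real.size(0), 1, device=real.device)   # per-sample mix ratio
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads = torch.autograd.grad(outputs=scores, inputs=x_hat,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Example with a toy linear critic on flattened stand-in samples.
critic = torch.nn.Linear(784, 1)
real, fake = torch.rand(8, 784), torch.rand(8, 784)
loss_d = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
print(loss_d.item())
```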
Berthelot et al. presented a method of adding an equilibrium term that balances the loss derived from the Wasserstein distance with the autoencoder’s loss distribution, maintaining equilibrium between the generator and discriminator while training an autoencoder-based GAN. This method enables faster, more stable training and yields results of high visual quality [39]. Takeru Miyato et al. sought to stabilize the training of the discriminator by applying a new weight normalization method called spectral normalization [40].
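Spectral normalization is available as a ready-made wrapper in PyTorch; the short sketch below (with illustrative layer sizes) applies it to each discriminator layer so that the layer’s spectral norm, and hence its Lipschitz constant, is constrained.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Discriminator whose linear layers are wrapped with spectral normalization.
discriminator = nn.Sequential(
    spectral_norm(nn.Linear(784, 256)), nn.LeakyReLU(0.2),
    spectral_norm(nn.Linear(256, 1)))
```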
Xudong Mao et al. introduced Least Squares GAN (LSGAN), which uses a least-squares loss function to solve the vanishing gradient problem caused by the sigmoid cross-entropy loss of the discriminator in the original GAN. This model can generate images of better quality than the original GAN and operates more stably during training [41].
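In practice the LSGAN change amounts to replacing the sigmoid cross-entropy loss with a least-squares loss against the same 1/0 targets; a minimal sketch assuming PyTorch (function names are our own):

```python
import torch
import torch.nn as nn

mse = nn.MSELoss()

def lsgan_d_loss(d_real, d_fake):
    # Push discriminator scores on real samples toward 1 and on fakes toward 0.
    return mse(d_real, torch.ones_like(d_real)) + mse(d_fake, torch.zeros_like(d_fake))

def lsgan_g_loss(d_fake):
    # The generator wants the discriminator's score on fakes to reach 1.
    return mse(d_fake, torch.ones_like(d_fake))
```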
Martin Heusel et al. proposed the Two Time-scale Update Rule (TTUR) for training GAN with stochastic gradient descent on the GAN loss function and for proving the convergence of GAN training, which had not been proven up to that point. TTUR assigns separate learning rates to the generator and discriminator, and the authors used stochastic approximation theory to prove that, under several assumptions, TTUR converges to a local Nash equilibrium [42].
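In code, the TTUR idea can be realized simply by giving the two optimizers different learning rates; the sketch below uses stand-in networks and illustrative rate values (the discriminator’s rate four times the generator’s), which are assumptions rather than values prescribed for every experiment in the original paper.

```python
import torch
import torch.nn as nn

G = nn.Linear(128, 784)   # stand-in generator
D = nn.Linear(784, 1)     # stand-in discriminator

# Two time-scale update rule: separate (here unequal) learning rates
# for the generator and the discriminator.
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
```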
Salimans et al. introduced five techniques that help GAN converge, especially for semi-supervised learning and image generation. They explained that feature matching, minibatch discrimination, historical averaging, one-sided label smoothing, and virtual batch normalization help in understanding non-convergence problems, improve the performance of semi-supervised learning, and generate better images [47]. Mario Lucic et al. then empirically showed that, given sufficient computation and time, the compared models can converge to similar Frechet Inception Distance (FID) scores. They compared the latest GANs using a fair methodology, evaluated the behavior of FID under different encoding networks, and provided estimates of the best achievable FID on each dataset. Additionally, they noted that, because of the randomness of the optimization process and model instability, comparisons of GANs should rely on distributions of results rather than single outcomes [48].

2.4.3. Image-to-Image Translation

Since the original GAN cannot determine domain relations when unpaired data are given, Taeksoo Kim et al. introduced DiscoGAN (Discover Cross-Domain Relations with GANs) to solve this. Unlike previous methods, DiscoGAN can be trained on two sets of images without explicit pair labels and needs no pre-training. A key feature of this model is the constraint that every image in one domain can be expressed as an image in the other domain, and the authors exploit this property to handle the mapping between the two domains in both directions [43].
Isola et al. proposed pix2pix, which applies CGAN as a general-purpose solution to image-to-image translation. Pix2pix learns both the mapping from an input image to an output image and the loss function used to train that mapping, so good results can be obtained without hand-engineering either [44]. Image-to-image translation usually aims to learn a mapping between input and output images using training data of aligned image pairs. Zhu et al. proposed CycleGAN, which can translate an image from a source domain X to a target domain Y even when no paired images exist. To perform this translation, a cycle-consistency loss enforcing F ( G ( X ) ) ≈ X was introduced by combining the mapping G : X → Y with the inverse mapping F : Y → X [45].
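A minimal sketch of the cycle-consistency idea, assuming PyTorch and toy stand-in mapping networks (the linear layers and sizes are assumptions, not CycleGAN’s real generators): G maps domain X to Y, F maps Y back to X, and an L1 loss penalizes the difference between F(G(x)) and x, and symmetrically for y.

```python
import torch
import torch.nn as nn

# Stand-in mappings between two image domains (flattened to 784 features).
G = nn.Linear(784, 784)   # G: X -> Y
F = nn.Linear(784, 784)   # F: Y -> X
l1 = nn.L1Loss()

def cycle_consistency_loss(x, y, lambda_cyc=10.0):
    # Forward cycle: x -> G(x) -> F(G(x)) should return to x.
    # Backward cycle: y -> F(y) -> G(F(y)) should return to y.
    return lambda_cyc * (l1(F(G(x)), x) + l1(G(F(y)), y))

x, y = torch.rand(4, 784), torch.rand(4, 784)
print(cycle_consistency_loss(x, y).item())
```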
Choi et al. pointed out that existing image-to-image translation methods have limited scalability because they only handle two domains at a time, a one-to-one setting. To compensate for this, they proposed StarGAN, which improves scalability by performing image-to-image translation across multiple domains with a single model [46].

2.5. Cases Related to Aerospace Missions Using GAN

Taeyeong Kim in Korea successfully restored the lateral magnetic field image of the Sun with image-to-image translation based on CGAN [11]. Image-to-image translation is a task in which images from one domain are transformed into another domain.
The magnetic field image of the Sun is important sensing data for solar activity and space weather forecasting, and the lateral view of the rotating Sun provides useful information for such forecasts. The front-side magnetic field image of the Sun, as viewed from the Earth, can be obtained with the Helioseismic and Magnetic Imager (HMI) of the Solar Dynamics Observatory (SDO). However, the stereo observatory observing the side of the Sun cannot acquire such an image, since it carries no magnetic field sensor.
Image pairs from SDO’s Atmospheric Imaging Assembly (AIA) sensor and HMI magnetograms were used to train the GAN. SDO is a satellite observing the front side of the Sun, and HMI is its magnetic field sensor. Afterward, images from the stereo Extreme Ultraviolet Imager (EUVI) sensor were given as the condition; notably, EUVI has the same characteristics as AIA.
Studies using GAN are also being actively performed in pan-sharpening and super-resolution, which generate high-definition images from low-definition ones. Even though CNN-based pan-sharpening methods have recently achieved state-of-the-art results, Jiayi Ma et al. regarded them as supervised approaches that disregard the spatial information of panchromatic images. The authors proposed Pan-GAN, a GAN-based unsupervised learning framework, to compensate for these drawbacks, and the proposed model achieved significant results in comparison with existing methods [49]. Qingjie Liu et al. proposed PSGAN, approaching pan-sharpening from the perspective of generative adversarial learning. Like the original GAN, PSGAN consists of a generator and a discriminator: the generator receives panchromatic and multi-spectral images as input and is designed to produce High-Resolution (HR) multi-spectral images, while the discriminator is trained adversarially to distinguish them [50].
Kui Jiang et al. proposed the GAN-based Edge-Enhancement Network (EEGAN) to reconstruct HR satellite images. EEGAN consists of two sub-networks, the Ultra-Dense SubNetwork (UDSN) and the Edge-Enhancement SubNetwork (EESN): UDSN generates intermediate results that contain noise, and EESN then refines them into HR images through an additional process [51]. Jakaria Rabbi and colleagues proposed the Edge-Enhanced Super-Resolution GAN (EESRGAN), inspired by the existing EEGAN and the Enhanced Super-Resolution GAN (ESRGAN). EESRGAN broadly consists of three parts: ESRGAN, the Edge-Enhancement Network (EEN), and a detection network. ESRGAN and EEN together raise the resolution of a satellite image, and objects are then detected using the detection network [52].
Yiting Tao et al. proposed a two-stream network to enhance the performance on HR satellite images. The proposed network comprises a mainline, a residual network structure for panchromatic images, and a Stacked Convolutional Auto Encoder (SCAE), which extracts features from multi-spectral images and supplements the mainline. The authors showed that excellent results can be obtained with a deeper network and more kernels [53]. Yuanfu Gong et al. then proposed Enlighten-GAN for the Super-Resolution (SR) of satellite images. The proposed model includes enlighten blocks that generate two-fold upsampling results; since enlighten blocks can receive effective gradients and learn high-frequency information, they provide strong generalization. The authors showed that Enlighten-GAN can generate more realistic and natural HR images than existing methods [54].
Wen Ma et al. proposed an SR method for remote sensing images based on Transferred GAN (TGAN). Unlike previous GAN-based SR methods, TGAN removes batch normalization layers and uses transfer learning to address the lack of data. With these changes, it showed far better performance than the existing Super-Resolution Convolutional Neural Network (SRCNN) and Super-Resolution Generative Adversarial Network (SRGAN) [55]. Table 3 lists research cases that have applied GAN techniques to aerospace missions.

3. Research Methodology

It took 10 years to announce the theory that the Moon’s oxygen came from the tail of the Earth’s magnetic field, and related studies are still ongoing. While research time can be reduced by hiring highly qualified personnel and deploying exploration satellites, the costs can be astronomical. This can be seen just by looking at exploration satellites: SpaceX invested about 500 billion KRW (417 million USD) in developing the Falcon 9, and the Korea Aerospace Research Institute invested approximately 230 billion KRW (192 million USD) in the development of KPLO. GAN would contribute to solving the above-mentioned problems and would be more cost-effective than hiring additional personnel or deploying more satellites. GAN-based applications are already used in many industrial areas [56]. If GAN is used, the amount of observation data could be increased.
In GAN, two networks (i.e., the G and the D ) compete. Network G generates realistic samples from noise, while D distinguishes whether an input sample is ground-truth or a fake.
GAN can synthesize samples that are difficult to distinguish from actual samples [57]; it can also model language or compose music [58,59]. This is possible even with a small amount of data, and since it minimizes the limitations of human resources and capital, it is a suitable alternative. The goal is to find a synthetic distribution of samples that resembles the actual distribution of the observation data. The GAN model to be used for data augmentation is CIPS. CIPS synthesizes images with quality equivalent to the state-of-the-art StyleGANv2 [34]. However, unlike StyleGANv2, CIPS does not use spatial convolutions and generates the color values of each pixel independently. Because it generates values per pixel, it has advantages over conventional methods in terms of scalability. The generator structure of CIPS is illustrated in Figure 3.
The top of Figure 3 illustrates the generated pipeline, and the bottom is the structure of the Modulated Fully-Connected (ModFC) layer. Mod is the modulation of values, and DeMod is the inverse operation of Mod.
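As a rough illustration of what a modulated fully-connected layer does (a sketch under our own simplifying assumptions, not the exact CIPS layer), the style vector is mapped to per-feature scales that modulate the layer weights (Mod), and the weights are then renormalized (DeMod) to keep the output variance stable:

```python
import torch
import torch.nn as nn

class ModulatedFC(nn.Module):
    """Sketch of a ModFC-style layer: style -> per-feature scale (Mod),
    weight renormalization (DeMod), then a per-sample linear map."""
    def __init__(self, in_dim, out_dim, style_dim):
        super().__init__()
        self.to_scale = nn.Linear(style_dim, in_dim)    # style -> per-feature scale
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x, w_style):
        s = self.to_scale(w_style) + 1.0                       # Mod
        weight = self.weight.unsqueeze(0) * s.unsqueeze(1)     # (B, out, in)
        demod = torch.rsqrt(weight.pow(2).sum(dim=2, keepdim=True) + 1e-8)
        weight = weight * demod                                # DeMod
        return torch.bmm(weight, x.unsqueeze(2)).squeeze(2) + self.bias

x = torch.randn(4, 64)    # per-pixel input features
w = torch.randn(4, 128)   # style vector per sample
print(ModulatedFC(64, 32, 128)(x, w).shape)  # torch.Size([4, 32])
```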
CIPS is a model that generates pixel values independently for each pixel by receiving pixel coordinates, instead of relying on the spatially constrained convolutions of existing image generators. When generating an image, CIPS uses a random vector z shared by all pixels and creates a style vector w using a mapping function M. This part is the same as StyleGANv2 and is used as the baseline.
CIPS uses a sine function to generate Fourier features for positional encoding in the first layer only; the other layers use the Leaky Rectified Linear Unit (LeakyReLU) function. However, when only Fourier features were used, wave-like artifacts occurred. As a result, a coordinate embedding is learned for each coordinate, and the Fourier features and coordinate embeddings are concatenated. Because each pixel is generated independently, without using spatial information, the model is scalable and can generate images such as panoramas.
Table 4 illustrates the output of a CIPS generator trained with various datasets.
The latent vector in Figure 3 is input into the mapping network after normalization, and the mapping network’s output is shared by all pixels of an image. ModFC receives the modulated style vector w and the output from the previous layer to modify its learnable weights and bias. The encoder encodes the coordinates of each pixel using Fourier features and a coordinate embedding, and the final FC layer returns the encoded value as the RGB value of the relevant pixel. The activation function of ModFC is LeakyReLU, represented as Formula (2) [60].
$$a_{i,j,k} = \max(z_{i,j,k}, 0) + \lambda \min(z_{i,j,k}, 0) \quad (2)$$
In Formula (2), the hyper-parameter λ lies in the range from 0 to 1. LeakyReLU scales the negative part of the input rather than mapping it to a constant 0, as the Rectified Linear Unit (ReLU) does, so even when a unit is inactive it produces a small nonzero output proportional to its input [61]. Since ReLU does not adjust the weights when a unit is not active, the gradient becomes 0. If gradients become 0 and activations become sparse, this can interfere with GAN training and slow down learning. Even though sparsity is often desirable in deep learning, this is not the case for GAN. To prevent such problems, LeakyReLU is used: since it allows activations for negative inputs, sparsity is alleviated.
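The sketch below illustrates, under our own simplifying assumptions, how per-pixel synthesis from coordinates can be built: pixel coordinates pass through a sine/cosine Fourier-feature projection, are concatenated with a learned per-coordinate embedding, and go through an MLP with LeakyReLU activations that outputs an RGB value for each pixel independently. It omits the latent style vector and ModFC modulation and is an illustration of the idea, not Anokhin et al.’s exact architecture.

```python
import torch
import torch.nn as nn

class PixelSynthesisSketch(nn.Module):
    """Per-pixel RGB synthesis from coordinates: Fourier features plus a
    learned coordinate embedding, fed to a LeakyReLU MLP (illustrative sizes)."""
    def __init__(self, resolution=64, n_fourier=32, emb_dim=32, hidden=128):
        super().__init__()
        self.resolution = resolution
        # Random projection matrix for sine-based Fourier features of (x, y).
        self.register_buffer("B", torch.randn(2, n_fourier) * 10.0)
        # Learned embedding for every pixel position of the fixed grid.
        self.coord_emb = nn.Embedding(resolution * resolution, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * n_fourier + emb_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 3))  # RGB per pixel

    def forward(self):
        r = self.resolution
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, r),
                                torch.linspace(-1, 1, r), indexing="ij")
        coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)       # (r*r, 2)
        proj = coords @ self.B                                      # (r*r, n_fourier)
        fourier = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        emb = self.coord_emb(torch.arange(r * r))                   # (r*r, emb_dim)
        rgb = self.mlp(torch.cat([fourier, emb], dim=-1))           # each pixel independent
        return rgb.reshape(r, r, 3)

print(PixelSynthesisSketch()().shape)  # torch.Size([64, 64, 3])
```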
The generator in Figure 3 also includes skip connections to the output as a basic configuration. As a perceptron-based model, CIPS calculates color information using only random noise and location coordinates [62]; spatial convolution, upsampling operations, and self-attention mechanisms are not used [27]. The reason is that sharing information between pixels can reduce synthesis efficiency in certain situations. The results in Table 4 were obtained by applying this algorithm to satellite landscape, satellite building, and landscape dataset images [63,64]. CIPS shows excellent performance in both qualitative and quantitative assessments using FID, precision, and recall indicators [41,65]. CIPS can be used in various domains, such as foveated rendering and SR [66,67], and it shows especially high performance in the spectral domain. If a coordinate grid is used, work can be carried out on complicated structures, such as cylindrical panoramas, just by changing the underlying coordinate system. In addition, CIPS can be extended to other areas and uses and is deemed suitable even for observation data synthesis.

4. Trends in Other AI Research

Previously, we looked at how machine learning is being used with regard to space and the Moon, and how GANs are being studied. In this section, along with the prospects for technologies that will be used in space in the future, we explain the important problems we noted while investigating other studies and the direction future work should take.

4.1. Trends of AI Technology Used in Space

AI is being applied to various industries, and this trend extends to the space industry as well. Artemis 1, scheduled to be launched in March 2022, is planned to be equipped with an AI voice assistant [68]. Research also continues on AI mounted on robots for constructing space residence bases and exploring resources, planned through 2028.
AI technology is also being applied to spacecraft manufacturing and operation. AI algorithms are being applied to advanced aerospace guidance and control systems [69], and explainable AI technologies that help pilots make decisions are being studied [70]. In a similar context, research and surveys related to autonomous spacecraft operation are also being actively conducted [69,71]. In addition, research on developing special materials used in spacecraft manufacturing and on detecting damage to parts is in progress [72].

4.2. The Direction of Research Related to the Moon

Research related to the Moon has mainly dealt with the detection of craters, and in the past, performance was mostly measured by comparing crater detection with human-labeled data. Recent studies have also detected craters that are difficult for humans to identify, identifying 109,956 new craters, of which an estimated 18,966 are larger than 8 km [73].
Machine learning technology is being used for tasks other than just crater detection. It has been shown that surface ice can be detected by removing noise in high-resolution images in the Permanently Shadowed Regions (PSR), further revealing previously undetectable geological features of craters [74,75]. Figure 4 shows the topographic map of the lunar south pole, including the PSRs (black polygons). The blue polygons are the study points [74].
While researching the Moon, the following problems were identified: (1) Moon data are limited, and (2) the patterns in Moon data are similar. The number of satellites that can collect Moon data is limited, and acquiring new data is not easy. In addition, since the data are collected only for a single fixed object, the Moon, it is difficult to distinguish differences within the data, unlike in general computer vision problems, and difficult to generalize to those problems. Therefore, we believe a study exploring data augmentation, such as our proposed method, or other real data resembling the Moon, is necessary.

4.3. AI Applied to Remote Sensing Images

Various cases of applying AI to remote sensing images are summarized and explained. Tasks using remote sensing images are largely divided into change detection and SR tasks.
Change detection in an image or video is the process of identifying differences in the state of an object or phenomenon by observing it at different times [76]. The procedure of the change detection method is as follows [77]. Figure 5 shows the process of change detection on remote sensing images.
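As a generic illustration of the simplest form of change detection (a sketch of plain image differencing, not the specific pipeline surveyed in [77]), two co-registered, normalized images are subtracted and the absolute difference is thresholded; the function name and threshold value are assumptions for illustration.

```python
import numpy as np

def change_map(img_t1, img_t2, threshold=0.2):
    """Pixel-wise change detection by image differencing on two co-registered,
    normalized images; the mean absolute difference over bands is thresholded."""
    diff = np.abs(img_t2.astype(np.float32) - img_t1.astype(np.float32))
    return diff.mean(axis=-1) > threshold   # True where change is detected

# Example on random stand-in images (H x W x bands, values in [0, 1]).
t1, t2 = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
print(change_map(t1, t2).sum(), "changed pixels")
```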
Various methods, such as random forests, kNN, and DNN, are used for change detection and are applied to tasks such as urban change detection, landslide mapping, and the detection of human settlements [77]. Recently, a method using the transformer, now well known in computer vision, has also been proposed and shows performance superior to existing models [78].
Image SR, which refers to the process of recovering HR images from LR images, is an important task in computer vision and image processing [79].
Recently, various deep learning methods have been used for SR, including general networks, residual networks, recursive networks, attention mechanisms, and generative adversarial networks [80]. These have been applied to remote sensing images for various tasks, as well as to photographs of people, landscapes, animals, and cartoon images.

5. Conclusions

Currently, the Moon is rusting in its polar regions. The cause is attributed to a combination of water beneath the lunar surface and oxygen coming from the Earth; however, related research is still ongoing. Even with the preceding research, it took 10 years to announce this theory, and because only small amounts of rust exist on the far side of the Moon, it will likely take considerable time to complete it. Therefore, more data are needed to increase the completeness of the theory. Meanwhile, the reason that NASA changed the trajectory of its space probe was to capture more images over a longer period, which shows how important the observation data are.
Before proceeding to generate images of the Moon, we reviewed GAN and the machine learning technologies that have been applied to the aerospace field. In a future study, a lunar image generation experiment will be conducted to help identify the Moon’s oxidation process, using the CIPS model selected in this review for its per-pixel independence and scalability. In addition, the authors would be able to review the development of a service that combines the video image payload technology of KARI with the image synthesis technology proposed in this paper.

Author Contributions

Conceptualization, J.-C.K., S.-C.L., J.C. and J.-H.H.; data curation, J.C. and J.-H.H.; formal analysis, J.-C.K., S.-C.L., J.C. and J.-H.H.; funding acquisition, J.-C.K.; methodology, J.-C.K., S.-C.L., J.C. and J.-H.H.; resources, J.C. and J.-H.H.; software, J.C. and J.-H.H.; supervision, J.C. and J.-H.H.; validation, J.-H.H.; visualization, J.C. and J.-H.H.; writing—original draft, J.-C.K., S.-C.L., J.C. and J.-H.H.; writing—review and editing, J.C. and J.-H.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by a Research promotion program of SCNU.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AI  Artificial Intelligence
GAN  Generative Adversarial Networks
KARI  Korea Aerospace Research Institute
KPLO  Korea Pathfinder Lunar Orbiter
M3  Moon Mineralogy Mapper
NASA  National Aeronautics and Space Administration
JPL  Jet Propulsion Laboratory
JAXA  Japan Aerospace Exploration Agency
SELENE  SELenological and ENgineering Explorer
LUTI  Lunar Terrain Imager
PolCam  Wide-Angle Polarimetric Camera
KMAG  KPLO Magnetometer
KGRS  KPLO Gamma-Ray Spectrometer
DTN  Delay/Disruption Tolerant Network
CNN  Convolutional Neural Networks
CDA  Crater Detection Algorithm
DNN  Deep Neural Networks
CGAN  Conditional GAN
EBGAN  Energy-Based GAN
DCGAN  Deep Convolutional GAN
PGGAN  Progressive Growing of GANs
LR  Low-Resolution
DeLiGAN  GAN for Diverse and Limited Data
StyleGAN  Style-based GAN
CIPS  Conditionally-Independent Pixel Synthesis
MLP  Multi-Layer Perceptron
PixelDA  Pixel-Level Domain Adaptation
EM  Earth Mover
WGAN  Wasserstein GAN
LSGAN  Least Squares GAN
TTUR  Two Time-scale Update Rule
FID  Frechet Inception Distance
DiscoGAN  Discover Cross-Domain Relations with GANs
HMI  Helioseismic and Magnetic Imager
SDO  Solar Dynamics Observatory
AIA  Atmospheric Imaging Assembly
EUVI  Extreme Ultraviolet Imager
HR  High-Resolution
EEGAN  GAN-based Edge-Enhancement Network
UDSN  Ultra-Dense SubNetwork
EESN  Edge-Enhancement SubNetwork
EESRGAN  Edge-Enhanced Super-Resolution GAN
ESRGAN  Enhanced Super-Resolution GAN
EEN  Edge-Enhancement Network
SCAE  Stacked Convolutional Auto Encoder
SR  Super-Resolution
TGAN  Transferred GAN
SRCNN  Super-Resolution Convolutional Neural Network
SRGAN  Super-Resolution Generative Adversarial Network
ModFC  Modulated Fully-Connected
LeakyReLU  Leaky Rectified Linear Unit
ReLU  Rectified Linear Unit
PSR  Permanently Shadowed Regions

References

  1. The Moon Is Rusting, and Researchers Want to Know Why. Available online: https://www.nasa.gov/feature/jpl/the-moon-is-rusting-and-researchers-want-to-know-why (accessed on 17 October 2021).
  2. Li, S.; Lucey, P.G.; Fraeman, A.A.; Poppe, A.R.; Sun, V.Z.; Hurley, D.M.; Schultz, P.H. Widespread hematite at high latitudes of the Moon. Sci. Adv. 2020, 6, eaba1940. [Google Scholar] [CrossRef] [PubMed]
  3. SOLAR SYSTEM EXPLORATION. Available online: https://solarsystem.nasa.gov/moons/earths-moon/in-depth/#surface (accessed on 17 October 2021).
  4. What’s the Difference between a Meteor, Meteoroid, and Meteorite? Available online: https://solarsystem.nasa.gov/asteroids-comets-and-meteors/meteors-and-meteorites/overview/?page=0&per_page=40&order=id+asc&search=&condition_1=meteor_shower%3Abody_type (accessed on 17 October 2021).
  5. Terada, K.; Yokota, S.; Saito, Y.; Kitamura, N.; Asamura, K.; Nishino, M.N. Biogenic oxygen from Earth transported to the Moon by a wind of magnetospheric ions. Nat. Astron. 2017, 1, 1–5. [Google Scholar] [CrossRef]
  6. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar]
  7. Park, S.W.; Kim, D.Y. Performance Comparison of Convolution Neural Network by Weight Initialization and Parameter Update Method. J. Korea Multimed. Soc. 2018, 21, 441–449. [Google Scholar]
  8. Park, S.W.; Kim, D.Y. Comparison of Image Classification Performance by Activation Functions in Convolutional Neural Networks. J. Korea Multimed. Soc. 2018, 21, 1142–1149. [Google Scholar]
  9. Park, S.W.; Kim, J.C.; Kim, D.Y. A Study on Classification Performance Analysis of Convolutional Neural Network using Ensemble Learning Algorithm. J. Korea Multimed. Soc. 2019, 22, 665–675. [Google Scholar]
  10. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-scale Machine Learning on Heterogeneous Systems. arXiv 2015, arXiv:1603.04467, 1–19. [Google Scholar]
  11. Kim, T.; Park, E.; Lee, H.; Moon, Y.J.; Bae, S.H.; Lim, D.; Jang, S.; Kim, L.; Cho, I.H.; Choi, M.; et al. Solar farside magnetograms from deep learning analysis of STEREO/EUVI data. Nat. Astron. 2019, 3, 397–400. [Google Scholar] [CrossRef]
  12. Rengasamy, D.; Hervé, M.P.; Grazziela, F.P. Deep Learning Approaches to Aircraft Maintenance, Repair and Overhaul: A Review. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018. [Google Scholar]
  13. DeLatte, D.M.; Crites, S.T.; Guttenberg, N.; Yairi, T. Automated crater detection algorithms from a machine learning perspective in the convolutional neural network era. Adv. Space Res. 2019, 64, 1615–1628. [Google Scholar] [CrossRef]
  14. Lee, H. Trends in Deep Learning Technology to Improve Crater Recognition on the Moon. Curr. Ind. Technol. Trends Aerosp. 2019, 17, 103–112. [Google Scholar]
  15. Jia, Y.; Wan, G.; Liu, L.; Wang, J.; Wu, Y.; Xue, N.; Wang, Y.; Yang, R. Split-Attention Networks with Self-Calibrated Convolution for Moon Impact Crater Detection from Multi-Source Data. Remote Sens. 2021, 13, 3193. [Google Scholar] [CrossRef]
  16. Silburt, A.; Ali-Dib, M.; Zhu, C.; Jackson, A.; Valencia, D.; Kissin, Y.; Tamayo, D.; Menou, K. Lunar crater identification via deep learning. Icarus 2019, 317, 27–38. [Google Scholar] [CrossRef] [Green Version]
  17. Jia, Y.; Wan, G.; Liu, L.; Wu, Y.; Zhang, C. Automated Detection of Lunar Craters Using Deep Learning. In Proceedings of the 2020 IEEE 9th Joint International Information Technology and Artificial Intelligence Conference (ITAIC), Chongqing, China, 11–13 December 2020. [Google Scholar]
  18. Ali-Dib, M.; Menou, K.; Jackson, A.P.; Zhu, C.; Hammond, N. Automated crater shape retrieval using weakly-supervised deep learning. Icarus 2020, 345, 113749. [Google Scholar] [CrossRef] [Green Version]
  19. Chen, S.; Li, Y.; Zhang, T.; Zhu, X.; Sun, S.; Gao, X. Lunar features detection for energy discovery via deep learning. Appl. Energy 2021, 296, 117085. [Google Scholar] [CrossRef]
  20. Wilhelm, T.; Grzeszick, R.; Fink, G.A.; Wöhler, C. Unsupervised Learning of Scene Categories on the Lunar Surface. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, Prague, Czech Republic, 25–27 February 2019; pp. 614–621. [Google Scholar]
  21. Roy, H.; Chaudhury, S.; Yamasaki, T.; DeLatte, D.; Ohtake, M.; Hashimoto, T. Lunar surface image restoration using U-net based deep neural networks. arXiv 2019, arXiv:1904.06683. [Google Scholar]
  22. Lesnikowski, A.; Bickel, V.T.; Angerhausen, D. Unsupervised distribution learning for lunar surface anomaly detection. arXiv 2020, arXiv:2001.04634. [Google Scholar]
  23. Xia, W.; Wang, X.; Zhao, S.; Jin, H.; Chen, X.; Yang, M.; Wu, X.; Hu, C.; Zhang, Y.; Shi, Y.; et al. New maps of lunar surface chemistry. Icarus 2019, 321, 200–215. [Google Scholar] [CrossRef]
  24. Mirza, M.; Osindero, S. Conditional Generative Adversarial Nets. arXiv 2014, arXiv:1411.1784. [Google Scholar]
  25. Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. Adv. Neural Inf. Process. Syst. 2016, 29, 2172–2180. [Google Scholar]
  26. Zhao, J.; Mathieu, M.; LeCun, Y. Energy-based Generative Adversarial Network. arXiv 2017, arXiv:1609.03126v4. [Google Scholar]
  27. Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2016, arXiv:1511.06434. [Google Scholar]
  28. Zhang, H.; Goodfellow, I.; Metaxas, D.; Odena, A. Self-Attention Generative Adversarial Networks. In Proceedings of the 36th International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; Volume 97, pp. 7354–7363. [Google Scholar]
  29. Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive Growing of GANs for Improved Quality, Stability, and Variation. In Proceedings of the 6th International Conference on Learning Representations, ICLR, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  30. Gurumurthy, S.; Kiran Sarvadevabhatla, R.; Venkatesh Babu, R. Deligan: Generative adversarial networks for diverse and limited data. In Proceedings of the IEEE conference on computer vision and pattern recognition, Honolulu, HI, USA, 22–25 July 2017; pp. 166–174. [Google Scholar]
  31. Brock, A.; Donahue, J.; Simonyan, K. Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv 2018, arXiv:1809.11096. [Google Scholar]
  32. Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 4401–4410. [Google Scholar]
  33. Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and Improving the Image Quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 8110–8119. [Google Scholar]
  34. Anokhin, I.; Demochkin, K.; Khakhulin, T.; Sterkin, G.; Lempitsky, V.; Korzhenkov, D. Image Generators With Conditionally-Independent Pixel Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 14278–14287. [Google Scholar]
  35. Mescheder, L.; Nowozin, S.; Geiger, A. The numerics of gans. In Proceedings of the Advances in Neural Information Processing Systems, NIPS, Long Beach, CA, USA, 4–9 December 2017; pp. 1825–1835. [Google Scholar]
  36. Bousmalis, K.; Silberman, N.; Dohan, D.; Erhan, D.; Krishnan, D. Unsupervised pixel-level domain adaptation with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Honolulu, Honolulu, HI, USA, 22–26 July 2017; pp. 3722–3731. [Google Scholar]
  37. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875. [Google Scholar]
  38. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved Training of Wasserstein GANs. arXiv 2017, arXiv:1704.00028. [Google Scholar]
  39. Berthelot, D.; Schumm, T.; Metz, L. BEGAN: Boundary Equilibrium Generative Adversarial Networks. arXiv 2017, arXiv:1703.10717v4. [Google Scholar]
  40. Miyato, T.; Kataoka, T.; Koyama, M.; Yoshida, Y. Spectral Normalization for Generative Adversarial Networks. In Proceedings of the ICLR, Vancouver, BC, Canada, 30 April–3 May 2018. [Google Scholar]
  41. Mao, X.; Li, Q.; Xie, H.; Lau, R.Y.K.; Wang, Z.; Smolley, S.P. Least Squares Generative Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, ICCV, Venice, Italy, 22–29 October 2017; pp. 2794–2802. [Google Scholar]
  42. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. In Proceedings of the 31st Conference on Neural Information Processing Systems, NIPS, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  43. Kim, T.; Cha, M.; Kim, H.; Lee, J.K.; Kim, J. Learning to Discover Cross-Domain Relations with Generative Adversarial Networks. In Proceedings of the 34th International Conference on Machine Learning, ICML, Sydney, Australia, 6–11 August 2017. [Google Scholar]
  44. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
  45. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
  46. Choi, Y.; Choi, M.; Kim, M.; Ha, J.W.; Kim, S.; Choo, J. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8789–8797. [Google Scholar]
  47. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training GANs. Adv. Neural Inf. Process. Syst. 2016, 29, 2234–2242. [Google Scholar]
  48. Lucic, M.; Kurach, K.; Michalski, M.; Gelly, S.; Bousquet, O. Are GANs Created Equal? A Large-Scale Study. In Proceedings of the Advances in Neural Information Processing Systems, NeurIPS, Montreal, QC, Canada, 3–8 December 2018. [Google Scholar]
  49. Ma, J.; Yu, W.; Chen, C.; Liang, P.; Guo, X.; Jiang, J. Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion. Inf. Fusion 2020, 62, 110–120. [Google Scholar] [CrossRef]
  50. Liu, Q.; Zhou, H.; Xu, Q.; Liu, X.; Wang, Y. PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening. IEEE Trans. Geosci. Remote Sens. 2020, 59, 10227–10242. [Google Scholar] [CrossRef]
  51. Jiang, K.; Wang, Z.; Yi, P.; Wang, G.; Lu, T.; Jiang, J. Edge-Enhanced GAN for Remote Sensing Image Superresolution. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5799–5812. [Google Scholar] [CrossRef]
  52. Rabbi, J.; Ray, N.; Schubert, M.; Chowdhury, S.; Chao, D. Small-Object Detection in Remote Sensing Images with End-to-End Edge-Enhanced GAN and Object Detector Network. Remote Sens. 2020, 12, 1432. [Google Scholar] [CrossRef]
  53. Tao, Y.; Xu, M.; Zhong, Y.; Cheng, Y. GAN-Assisted Two-Stream Neural Network for High-Resolution Remote Sensing Image Classification. Remote Sens. 2017, 9, 1328. [Google Scholar] [CrossRef] [Green Version]
  54. Gong, Y.; Liao, P.; Zhang, X.; Zhang, L.; Chen, G.; Zhu, K.; Tan, X.; Lv, Z. Enlighten-GAN for Super Resolution Reconstruction in Mid-Resolution Remote Sensing Images. Remote Sens. 2021, 13, 1104. [Google Scholar] [CrossRef]
  55. Ma, W.; Pan, Z.; Guo, J.; Lei, B. Super-Resolution of Remote Sensing Images Based on Transferred Generative Adversarial Network. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018. [Google Scholar]
  56. Park, S.W.; Ko, J.S.; Huh, J.H.; Kim, J.C. Review on Generative Adversarial Networks: Focusing on Computer Vision and Its Applications. Electronics 2021, 10, 1216. [Google Scholar] [CrossRef]
  57. Park, S.W.; Huh, J.H.; Kim, J.C. BEGAN v3: Avoiding Mode Collapse in GANs Using Variational Inference. Electronics 2020, 9, 688. [Google Scholar] [CrossRef] [Green Version]
  58. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog 2019, 1, 9. [Google Scholar]
  59. Payne, C. MuseNet. Available online: https://openai.com/blog/musenet (accessed on 17 October 2021).
  60. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 30. [Google Scholar]
  61. Nair, V.; Hinton, G. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
  62. Gallant, S.I. Perceptron-based learning algorithms. IEEE Trans. Neural Netw. 1990, 1, 179–191. [Google Scholar]
  63. Google Earth View. Available online: https://earthview.withgoogle.com/ (accessed on 17 October 2021).
  64. AIcrowd. Available online: https://www.crowdai.org/challenges/mapping-challenge (accessed on 17 October 2021).
  65. Sajjadi, M.S.M.; Bachem, O.; Lucic, M.; Bousquet, O.; Gelly, S. Assessing generative models via precision and recall. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 5228–5237. [Google Scholar]
  66. Kaplanyan, A.S.; Sochenov, A.; Leimkühler, T.; Okunev, M.; Goodall, T.; Rufo, G. DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos. ACM Trans. Graph. 2019, 38, 1–13. [Google Scholar] [CrossRef] [Green Version]
  67. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
  68. Grush, L. Amazon’s Alexa and Cisco’s Webex Are Heading to Deep Space on NASA’s Upcoming Moon Mission. Available online: https://www.theverge.com/2022/1/5/22866746/nasa-artemis-i-amazon-alexa-cisco-webex-lockheed-martin-orion (accessed on 25 February 2022).
  69. Chai, R.; Tsourdos, A.; Savvaris, A.; Chai, S.; Xia, Y.; Chen, C.L.P. Review of advanced guidance and control algorithm for space/aerospace vehicles. Prog. Aerosp. Sci. 2021, 122, 100696. [Google Scholar] [CrossRef]
  70. Sutthithatip, S.; Perinpanayagam, S.; Aslam, S.; Wileman, A. Explainable AI in Aerospace for Enhanced System Performance. In Proceedings of the 2021 IEEE/AIAA 40th Digital Avionics Systems Conference (DASC), San Antonio, TX, USA, 3–7 October 2021. [Google Scholar]
  71. Starek, J.A.; Açıkmeşe, B.; Nesnas, I.A.; Pavone, M. Spacecraft autonomy challenges for next-generation space missions. In Advances in Control System Technology for Aerospace Applications; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1–48. [Google Scholar]
  72. Das, M.; Sahu, S.; Parhi, D.R. Composite materials and their damage detection using AI techniques for aerospace application: A brief review. Mater. Today Proc. 2021, 44, 955–960. [Google Scholar] [CrossRef]
  73. Yang, C.; Zhao, H.; Bruzzone, L.; Benediktsson, J.A.; Liang, Y.; Liu, B.; Zeng, X.; Guan, R.; Li, C.; Ouyang, Z. Lunar impact crater identification and age estimation with Chang’E data by deep and transfer learning. Nat. Commun. 2020, 11, 1–15. [Google Scholar] [CrossRef]
  74. Bickel, V.T.; Moseley, B.; Lopez-Francos, I.; Shirley, M. Peering into lunar permanently shadowed regions with deep learning. Nat. Commun. 2021, 12, 1–12. [Google Scholar] [CrossRef]
  75. Moseley, B.; Bickel, V.; Lopez-Francos, I.G.; Rana, L. Extreme Low-Light Environment-Driven Image Denoising over Permanently Shadowed Lunar Regions with a Physical Noise Model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 6317–6327. [Google Scholar]
  76. Singh, A. Review Article Digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003. [Google Scholar] [CrossRef] [Green Version]
  77. Asokan, A.; Anitha, J. Change detection techniques for remote sensing applications: A survey. Earth Sci. Inf. 2019, 12, 143–160. [Google Scholar] [CrossRef]
  78. Bandara, W.G.C.; Patel, V.M. A Transformer-Based Siamese Network for Change Detection. arXiv 2022, arXiv:2201.01293. [Google Scholar]
  79. Wang, Z.; Chen, J.; Hoi, S.C.H. Deep Learning for Image Super-Resolution: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 3365–3387. [Google Scholar] [CrossRef] [Green Version]
  80. Anwar, S.; Khan, S.; Barnes, N. A Deep Journey into Super-resolution: A Survey. ACM Comput. Surv. 2021, 53, 1–34. [Google Scholar] [CrossRef]
Figure 1. Image of the lateral magnetic field of the Sun, restored with a GAN. (a) Extreme-ultraviolet image observed from a satellite. (b) Lateral magnetic-field image generated by the GAN. (c) Magnetic-field image observed from a satellite.
Figure 2. GAN model.
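For readers who want to connect Figure 2 to an implementation, the following is a minimal sketch of the adversarial training loop, assuming PyTorch; the tiny fully connected generator and discriminator, the layer sizes, and the learning rates are illustrative placeholders rather than a configuration used in any of the cited studies.

```python
# A minimal sketch (PyTorch assumed) of the adversarial training loop illustrated in
# Figure 2. The networks and hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor):
    """One adversarial update; real_batch has shape (batch, img_dim) scaled to [-1, 1]."""
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator update: push D(real) towards 1 and D(G(z)) towards 0.
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: push D(G(z)) towards 1, i.e., fool the discriminator.
    loss_g = bce(D(G(torch.randn(b, latent_dim))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example call with random stand-in "real" images.
print(train_step(torch.rand(16, img_dim) * 2 - 1))
```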
Figure 3. Generator of CIPS.
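To make the per-pixel idea behind Figure 3 concrete, the sketch below generates each RGB value independently from its (x, y) coordinate and one shared latent code using a small MLP, in the spirit of CIPS [34]. The random Fourier features, layer widths, and depth are simplifying assumptions and do not reproduce the original CIPS generator, which also relies on modulated fully connected layers and learned coordinate embeddings.

```python
# A minimal sketch of conditionally-independent pixel synthesis: the same MLP maps a
# pixel coordinate plus a shared latent code to an RGB value, with no spatial convolutions.
import torch
import torch.nn as nn

class PixelMLP(nn.Module):
    def __init__(self, latent_dim: int = 64, fourier_feats: int = 32, hidden: int = 128):
        super().__init__()
        # Fixed random projection of the 2-D coordinate (a stand-in for the positional
        # encodings and learned coordinate embeddings used by CIPS).
        self.register_buffer("B", torch.randn(2, fourier_feats) * 10.0)
        self.mlp = nn.Sequential(
            nn.Linear(2 * fourier_feats + latent_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 3), nn.Tanh(),  # an independent RGB value per pixel
        )

    def forward(self, coords: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) in [-1, 1]; z: (latent_dim,) shared by all pixels of one image.
        proj = coords @ self.B
        feats = torch.cat([torch.sin(proj), torch.cos(proj),
                           z.expand(coords.size(0), -1)], dim=1)
        return self.mlp(feats)

# Usage: synthesise a 64 x 64 image pixel by pixel from a single latent code.
g = PixelMLP()
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij")
coords = torch.stack([xs.flatten(), ys.flatten()], dim=1)
image = g(coords, torch.randn(64)).reshape(64, 64, 3)
print(image.shape)  # torch.Size([64, 64, 3])
```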
Figure 4. Topographic map of the lunar South Pole, including permanently shadowed regions (PSRs, black polygons); blue polygons mark potential surface ice.
Figure 5. The process of change detection on remote sensing images.
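As a minimal illustration of the pixel-level differencing step summarized in Figure 5, the sketch below thresholds the difference of two co-registered grayscale images into a binary change map. Real change-detection pipelines add radiometric normalization, registration, and post-classification analysis [76,77]; the k-sigma threshold rule here is only an assumed example.

```python
# A minimal sketch of pixel-level image differencing and thresholding for change
# detection. Inputs are assumed co-registered, radiometrically comparable grayscale images.
import numpy as np

def change_map(img_t1: np.ndarray, img_t2: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Return a boolean change mask for two equally shaped grayscale images."""
    diff = img_t2.astype(np.float64) - img_t1.astype(np.float64)
    # Flag pixels whose difference deviates more than k standard deviations from the mean.
    return np.abs(diff - diff.mean()) > k * diff.std()

# Example with synthetic data: a bright "new feature" appears in the second acquisition.
t1 = np.random.rand(128, 128)
t2 = t1.copy()
t2[40:60, 40:60] += 1.0
print("changed pixels:", int(change_map(t1, t2).sum()))
```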
Table 1. Research cases that have applied machine learning techniques to the Moon.

Author | Research Area | Used Model
Delatte et al. [13] | Survey papers (usually crater detection) | -
Lee Honnhee [14] | Review papers (usually crater detection) | -
Jia et al. [15] | Lunar surface detection | Self-calibrated convolution
Silburt et al. [16] | Lunar surface detection | CNNs (U-Net-based)
Yutong Jia et al. [17] | Lunar surface detection | CNNs (U-Net-based)
Ali-Dib et al. [18] | Lunar surface detection | CNNs (Mask R-CNN-based)
Shen et al. [19] | Lunar surface detection | High-Resolution-Moon-Net
Wilhelm et al. [20] | Unsupervised learning | CNNs (VGG16-based)
Roy et al. [21] | Unsupervised learning | CNNs (U-Net-based)
Lesnikowski et al. [22] | Unsupervised learning | CNNs (VAE-based)
Xia et al. [23] | Abundance map of oxide and magnesium | DNN
Table 2. Classification of GAN models.

Category | Model/Method
Architecture | CGAN [24]; InfoGAN [25]; EBGAN [26]; DCGAN [27]; SAGAN [28]; PGGAN [29]; DeLiGAN [30]; BigGAN [31]; StyleGAN [32]; StyleGANv2 [33]; CIPS [34]
Stability | Consensus optimization [35]; PixelDA [36]; WGAN [37]; Gradient penalty [38]; BEGAN [39]; Spectral normalization [40]; LSGAN [41]; TTUR & FID [42]
Image-to-Image Translation | DiscoGAN [43]; pix2pix [44]; CycleGAN [45]; StarGAN [46]
Table 3. Research cases that have applied GAN techniques to aerospace missions.

Author | Used Model | Research Area
Kim et al. [48] | CGAN | Image generation
Jiayi Ma et al. [49] | Pan-GAN | Pan-sharpening
Qingjie Liu et al. [50] | PSGAN | Pan-sharpening
Kui Jiang et al. [51] | EEGAN | Super-resolution (SR)
Jakaria Rabbi et al. [52] | EESRGAN | SR + object detection
Yiting Tao et al. [53] | Residual network + SCAE | SR
Yuanfu Gong et al. [54] | Enlighten-GAN | SR
Wen Ma et al. [55] | TGAN | SR
Table 4. Conditionally-Independent Pixel Synthesis (CIPS) generator trained with various datasets.

Dataset | Visual Results
Satellite-Landscapes | Eight generated sample images (not reproduced here)
Satellite-Buildings | Eight generated sample images (not reproduced here)
Landscapes | Three generated sample images (not reproduced here)