Article

On-Orbit Modulation Transfer Function Estimation Based on the Refined Image Kernel

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Chang Guang Satellite Technology Co., Ltd., Changchun 130102, China
4 Key Laboratory of Advanced Technology for Aerospace Vehicles of Liaoning Province, Dalian University of Technology, Dalian 116024, China
5 State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(9), 4362; https://doi.org/10.3390/s23094362
Submission received: 17 February 2023 / Revised: 14 April 2023 / Accepted: 26 April 2023 / Published: 28 April 2023
(This article belongs to the Section Remote Sensors)

Abstract

To overcome the limitations of traditional on-orbit modulation transfer function (MTF) measurement methods, which depend heavily on natural features, scenery, artificial edges, and point source targets, this paper presents an on-orbit MTF measurement method for remote sensing imagers based on the refined image kernel (RIK) acquired directly from remote sensing images. First, the kernel is estimated from remote sensing sub-images with rich texture details using an iterative support detection (ISD) algorithm and is then refined by the central pixel energy concentration (EC) to obtain the RIK. Second, the MTF curves are calculated by interpolating the RIK and applying the Fourier transform. Finally, the final MTF is the average of the MTFs at the Nyquist frequency obtained from each RIK. To demonstrate the feasibility and validity of this method, the measured MTFs were compared with the results of the ISO12233 edge method, with an error of no more than 7%. The relative error of the measured results does not exceed 5% for an image signal-to-noise ratio (SNR) above 20 dB. The on-orbit MTF measured from remote sensing images of the Jilin-1 satellite differs from the ISO12233 edge method result by less than 2%. These results demonstrate that the proposed method is accurate and robust, can substantially increase the efficiency of on-orbit MTF measurement, and provides a reference for high-frequency monitoring of on-orbit satellite stability and optical imaging quality.

1. Introduction

High-resolution optical remote sensing satellites have undergone extensive development in recent years as a result of the continuous advancements in science and technology, and the resulting high-resolution remote sensing images have extremely broad application potential as well as significant values in fields such as natural resource analysis, ecological environmental protection, and geographic mapping [1,2,3,4]. When discussing the imaging performance of an optical remote sensing camera, a crucial component of a remote sensing satellite, the MTF is typically employed to characterize the response of an imaging system to various spatial frequency input signals [5,6,7]. Although the camera is calibrated and evaluated in the lab before launch, the actual on-orbit MTF fluctuates to variable degrees due to factors including launch-related vibrations and the space environment [8,9,10,11]. Therefore, it is necessary to measure the on-orbit MTF to monitor the actual performance of on-orbit cameras.
The on-orbit MTF of a remote sensing imager is typically obtained by digital Fourier analysis, which is based on the object–image correlation of the optical system. The captured image and the chosen reference target characteristics are digitally processed to extract the on-orbit MTF. A variety of methods are currently available, including the knife-edge method, the point light source method, the pulse method, and the periodic target method [12,13,14,15,16,17]. These methods are used by high-resolution satellites, including QuickBird, IKONOS, SPOT, Landsat, and GF, for on-orbit MTF measurements. Each technique has its benefits and has proven somewhat successful [18,19,20,21,22]. However, ideal knife edges and point sources are difficult to find among natural features, and because a knife edge does not directly provide frequency content, an additional line spread function (LSF) extraction is required to resolve the frequency information; this process is easily affected by the contrast and noise of natural features, which increases errors and reduces the accuracy of on-orbit image quality assessment [23]. Identifying optimal pulsed targets is difficult owing to differences in satellite resolution, and locating pulsed natural features in a fixed direction can be time-consuming; in addition, the pulse width can affect measurement accuracy. Point source targets require strict control of the light source irradiance to prevent overexposure, which would degrade the accuracy of MTF measurement. Artificial targets require labor-intensive setup, and the amount of data that can be captured by MTF detection in the fixed direction of a periodic target is insufficient. The time required to acquire MTFs and the expense of on-orbit MTF measurements grow due to the limits of specific orbits and revisit periods of remote sensing satellites.
In this paper, Section 2 first introduces the approach of extracting image kernels, illustrates how the MTF can be estimated from them, and presents the fundamentals of the kernel estimation algorithm and the MTF calculation. Section 3 discusses the factors affecting the accuracy of kernel estimation and investigates refining the image kernel by the energy concentration to obtain the refined image kernel. The validation experiments for the proposed method are then described in detail, together with the effects of dynamic MTF levels and image SNR on the measurement results, in Section 4 and Section 5. Finally, Section 6 applies the proposed method to the measurement of the on-orbit MTF of the Jilin-1 satellite remote sensing imager and compares the results with those obtained by the conventional method.
The highlights of this paper are as follows: (1) A method for measuring the on-orbit MTF of remote sensing satellite optical payloads is proposed that does not rely on feature scenery (such as rooflines, farmland, or roadways) or on artificially established edges and point source targets; instead, the image kernel is extracted directly from remote sensing images with rich texture details, and the MTF is estimated from the refined image kernel obtained by refining the kernel using the energy concentration. (2) The method greatly simplifies the process of measuring the on-orbit MTF: in theory, only a cloud-free, detail-rich remote sensing image is needed to estimate the on-orbit MTF of a remote sensing satellite imager, and the measurement result has only a small error compared with the traditional method. (3) This method enables high-frequency measurement and real-time monitoring of the on-orbit imaging quality of optical remote sensing satellites.

2. Fundamentals of Measurement

2.1. Modeling of Modulation Transfer Function

Optical remote sensing imagers can be thought of as linear systems, so it is feasible to model the imaging procedure as a convolution operation. The relation between the object and the image is usually described (illustrated in Figure 1) as
$g(x, y) = o(x, y) \otimes k(x, y) + \varepsilon$
where $(x, y)$ denotes coordinates in the continuous spatial domain, $g(x, y)$ denotes the image captured by the camera, $o(x, y)$ is the objective scene, $k(x, y)$ is the kernel, which for an imaging system can be regarded as the point spread function (PSF) of the optical system, $\varepsilon$ indicates the image noise, and $\otimes$ represents the convolution operation.
The point spread function describes the blurring introduced by an optical remote sensing camera when acquiring an image: for an imaging system whose input is an ideal point source, the output is no longer a point but a diffused spot. The kernel extracted from the image is interpolated and fitted to obtain the system PSF, and the Fourier transform is then applied to obtain the optical transfer function (OTF):
$OTF(u, v) = \iint k(x, y) \exp[-2 i \pi (x u + y v)] \, dx \, dy$
where u and v are the spatial frequencies in the frequency domain along the two coordinate directions, respectively. The MTF can be expressed as
$MTF(u, v) = |OTF(u, v)|$
Since the MTF calculated from a single kernel may have a relatively large error (for the reasons discussed in Section 3), we extract kernels from several sub-images, calculate the MTF of each separately, and then average them to obtain the final measured MTF:
$MTF = \frac{1}{n} \sum_{i=1}^{n} MTF_i$
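For illustration, the following minimal Python sketch shows how Equations (2)–(4) can be evaluated numerically: the kernel is interpolated to approximate the PSF, Fourier-transformed, and the MTF values at the Nyquist frequency are read off and averaged. The function names, array handling, and interpolation factor are our own illustrative assumptions, not values prescribed by the method.

```python
import numpy as np
from scipy.ndimage import zoom

def mtf_from_kernel(kernel: np.ndarray, interp: int = 4) -> np.ndarray:
    """Interpolate a kernel to approximate the PSF and return its 2-D MTF."""
    psf = zoom(kernel.astype(float), interp, order=3)  # cubic interpolation of the kernel
    psf /= psf.sum()                                   # normalize so that MTF(0, 0) = 1
    otf = np.fft.fftshift(np.fft.fft2(psf))            # Equation (2): OTF via the Fourier transform
    return np.abs(otf)                                 # Equation (3): MTF = |OTF|

def mtf_at_nyquist(mtf2d: np.ndarray, interp: int = 4):
    """Read the longitudinal and transverse MTF at the detector Nyquist frequency."""
    cy, cx = mtf2d.shape[0] // 2, mtf2d.shape[1] // 2
    # After interp-fold upsampling, the detector Nyquist frequency sits at
    # 1/(2*interp) of the full frequency axis away from the zero-frequency bin.
    ny, nx = mtf2d.shape[0] // (2 * interp), mtf2d.shape[1] // (2 * interp)
    return mtf2d[cy + ny, cx], mtf2d[cy, cx + nx]      # (longitudinal, transverse)

# Equation (4): average the per-sub-image MTFs at the Nyquist frequency, e.g.,
# final_mtf = np.mean([mtf_at_nyquist(mtf_from_kernel(k)) for k in kernels], axis=0)
```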

2.2. Kernel Estimation Method

Estimating the kernel from the image is crucial for the on-orbit MTF measurement method presented in this study. The image kernel is mostly used in blind image deblurring, i.e., estimating the blur kernel and the latent sharp image from the input blurred image. However, this is an ill-posed problem, as infinitely many pairs of blur kernels and latent images can generate the same blurred image. Therefore, sparse priors and regularization have been proposed for the ill-posed optimization problem, such as Maximum A Posteriori (MAP) formulations, the $L_0$-regularized prior, the dark channel prior, learned image priors using a CNN, and the Gaussian prior [24,25].
In this regard, we estimate the kernel using the iterative support detection (ISD) algorithm [26], which consists of two steps. The first step efficiently computes a coarse version of the kernel without imposing much sparsity. The second step refines it by non-convex optimization, carrying over the initial kernel estimate from step one.

2.2.1. Estimation of the Initial Kernel

To guide the initial kernel estimation, we filter the image and predict the salient edges. To obtain meaningful step edges, we first pre-smooth the image using Gaussian filtering and then solve the following shock-filter partial differential equation:
$\partial I / \partial t = -\mathrm{sign}(\Delta I) \, \|\nabla I\|, \quad I^0 = G_{\sigma_0} \otimes I_{input}$
where $\nabla I = (I_x, I_y)$ and $\Delta I = I_x^2 I_{xx} + 2 I_x I_y I_{xy} + I_y^2 I_{yy}$ denote the first- and second-order spatial derivatives, respectively, and $I^0$ is the Gaussian-smoothed input image used as the starting point for iteratively updating $I$.
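A minimal sketch of the pre-smoothing and shock-filter evolution in Equation (5) follows; the finite-difference discretization, step size, and iteration count are our own illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def shock_filter(image: np.ndarray, sigma: float = 1.0,
                 iters: int = 10, dt: float = 0.1) -> np.ndarray:
    """Gaussian pre-smoothing followed by the shock-filter evolution of Equation (5)."""
    I = gaussian_filter(image.astype(float), sigma)    # I^0: Gaussian-smoothed input
    for _ in range(iters):
        Iy, Ix = np.gradient(I)                        # first-order derivatives
        Iyy, _ = np.gradient(Iy)
        Ixy, Ixx = np.gradient(Ix)
        # Delta I: second derivative along the gradient direction
        lap = Ix**2 * Ixx + 2 * Ix * Iy * Ixy + Iy**2 * Iyy
        I = I - dt * np.sign(lap) * np.hypot(Ix, Iy)   # dI/dt = -sign(Delta I) * ||grad I||
    return I
```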
According to previous research [27], salient edges may not always contribute to the estimation of the kernel, and if the kernel scale is greater than the object scale, the edge information of the image may be detrimental to the kernel estimation process. To properly estimate the kernel, the edge information in the image must be filtered; the usefulness of the edges in the image can be determined by the edge confidence level
$R(x) = \dfrac{\left\| \sum_{y \in N_h(x)} \nabla B(y) \right\|}{\sum_{y \in N_h(x)} \|\nabla B(y)\| + 0.5}$
where $B$ is the input image and $N_h(x)$ is a window of size $h \times h$ centered on pixel $x$. The constant 0.5 prevents spuriously large confidence values in flat image regions. For thin elements (spikes), the signed gradients $\nabla B(y)$ largely cancel out in $\sum_{y \in N_h(x)} \nabla B(y)$. The amount of image structure in $N_h(x)$ is estimated by $\sum_{y \in N_h(x)} \|\nabla B(y)\|$, the sum of the absolute gradient magnitudes of image $B$ in the window. A small $R$ therefore indicates either spikes or a flat region, whose gradient components should be deactivated. A mask is then used to exclude pixels within windows of small $R(x)$:
$M = H(R - \tau_r)$,
where $H(\cdot)$ denotes the Heaviside step function, which produces zeros for negative values and ones otherwise, and $\tau_r$ is a predefined threshold. The final salient edges selected for kernel estimation are
$\nabla I^s = \nabla \tilde{I} \circ H\!\left( M \|\nabla \tilde{I}\|_2 - \tau_s \right)$,
where $\tilde{I}$ is the shock-filtered image and $\tau_s$ is a threshold on the gradient magnitude. Excluding part of the gradients through Equations (6)–(8) improves the accuracy of the kernel estimation. During the iterations, $\tau_r$ and $\tau_s$ start at 0.1 and 0.05, respectively, and are decreased (divided by 1.1 in each iteration) to include more edge information. With the salient edges detected and filtered as described above, the initial kernel is estimated by minimizing the objective function with a Gaussian regularizer:
$E(k) = \|\nabla I^s \otimes k - \nabla B\|^2 + \gamma \|k\|^2$,
where $\gamma$ is a weight. Performing FFTs on all the variables and setting the derivative with respect to $k$ to zero yields
$k = \mathcal{F}^{-1}\!\left( \dfrac{ \overline{\mathcal{F}(\partial_x I^s)} \, \mathcal{F}(\partial_x B) + \overline{\mathcal{F}(\partial_y I^s)} \, \mathcal{F}(\partial_y B) }{ |\mathcal{F}(\partial_x I^s)|^2 + |\mathcal{F}(\partial_y I^s)|^2 + \gamma } \right)$,
where $\mathcal{F}(\cdot)$ and $\mathcal{F}^{-1}(\cdot)$ represent the FFT and inverse FFT, respectively, and $\overline{\mathcal{F}(\cdot)}$ is the complex conjugate.
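The closed-form update of Equation (10) can be sketched as follows; the gradient discretization, the value of $\gamma$, and the post-processing of the kernel are our own illustrative assumptions.

```python
import numpy as np

def solve_kernel(Is: np.ndarray, B: np.ndarray, gamma: float = 1e-2) -> np.ndarray:
    """Closed-form kernel estimate from the salient-edge image Is and blurred image B."""
    F, iF = np.fft.fft2, np.fft.ifft2
    gx_Is, gy_Is = np.gradient(Is, axis=1), np.gradient(Is, axis=0)
    gx_B,  gy_B  = np.gradient(B,  axis=1), np.gradient(B,  axis=0)
    num = np.conj(F(gx_Is)) * F(gx_B) + np.conj(F(gy_Is)) * F(gy_B)
    den = np.abs(F(gx_Is))**2 + np.abs(F(gy_Is))**2 + gamma
    k = np.real(iF(num / den))
    k = np.fft.fftshift(k)       # center the kernel peak (crop to the kernel support in practice)
    k[k < 0] = 0                 # a PSF is non-negative
    return k / k.sum()           # normalize the kernel energy
```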
The latent image $I$ at the next level of the image pyramid is then predicted by using the previously computed salient-edge gradients as a spatial prior; the objective function is
$E(I) = \|I \otimes k - B\|^2 + \lambda \|\nabla I - \nabla I^s\|^2$,
where the regularization parameter $\lambda = 2 \times 10^{-2}$. A few algebraic steps in the frequency domain yield
$I = \mathcal{F}^{-1}\!\left( \dfrac{ \overline{\mathcal{F}(k)} \, \mathcal{F}(B) + \lambda \left( \overline{\mathcal{F}(\partial_x)} \, \mathcal{F}(I_x^s) + \overline{\mathcal{F}(\partial_y)} \, \mathcal{F}(I_y^s) \right) }{ \overline{\mathcal{F}(k)} \, \mathcal{F}(k) + \lambda \left( \overline{\mathcal{F}(\partial_x)} \, \mathcal{F}(\partial_x) + \overline{\mathcal{F}(\partial_y)} \, \mathcal{F}(\partial_y) \right) } \right)$
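Similarly, a minimal sketch of the latent-image solve in Equation (12); representing the derivative operators by their frequency responses and zero-padding $k$ to the image size are our own assumptions.

```python
import numpy as np

def update_latent(B: np.ndarray, k_pad: np.ndarray, Ixs: np.ndarray,
                  Iys: np.ndarray, lam: float = 2e-2) -> np.ndarray:
    """One frequency-domain solve for I given the kernel k (zero-padded to B's shape)."""
    F = np.fft.fft2
    Fk = F(np.fft.ifftshift(k_pad))              # kernel spectrum with its peak at the origin
    # Frequency responses of the forward-difference operators d/dx and d/dy
    dx = np.zeros(B.shape); dx[0, 0], dx[0, 1] = -1.0, 1.0
    dy = np.zeros(B.shape); dy[0, 0], dy[1, 0] = -1.0, 1.0
    Fdx, Fdy = F(dx), F(dy)
    num = np.conj(Fk) * F(B) + lam * (np.conj(Fdx) * F(Ixs) + np.conj(Fdy) * F(Iys))
    den = np.abs(Fk)**2 + lam * (np.abs(Fdx)**2 + np.abs(Fdy)**2)
    return np.real(np.fft.ifft2(num / den))
```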

2.2.2. Kernel Elaboration Based on the ISD Algorithm

After the initial kernel estimation, the procedure continues with the iterative support detection (ISD) method, which retains the dominant elements in each iteration and mitigates the effect of noise on the kernel estimation. By relaxing the regularization penalty so that these elements are not significantly affected by the regularization at each refinement, the approach reduces inaccurate estimates and converges quickly [28].
Each ISD iteration begins with the previously estimated kernel $k^i$ as a partial support: the large elements are collected in $S^{i+1}$, and the remaining elements belong to the complement $\overline{S^{i+1}}$. $S^{i+1}$ is generated by
$S^{i+1} \leftarrow \{\, j : k_j^i > \varphi^s \,\}$
where $j$ is the index of an element of $k^i$, and $\varphi^s$ is a positive threshold defining the partial support in each iteration. $\varphi^s$ is configured by the "first significant jump" rule: arrange all elements $k_j^i$ in ascending order, compute the differences $d_0, d_1, \ldots$ between adjacent elements, and, starting from $d_0$, find the first difference satisfying $d_j > \|k^i\|_\infty / (2hi)$, where $h$ is the kernel size, $i$ is the iteration number, and $\|k^i\|_\infty$ is the maximum element of $k^i$. Assigning the element value at the index $j$ that meets this criterion to $\varphi^s$ leads to an adaptive kernel refinement, as each supported element is penalized less in the iterative phase. The objective function to be minimized is then
$E(k) = \frac{1}{2} \|\nabla I^s \otimes k - \nabla B\|^2 + \gamma \sum_{j \in \overline{S^{i+1}}} |k_j|$
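A minimal sketch of the "first significant jump" support selection described above; the handling of the no-jump case and the boundary of the support are our own assumptions.

```python
import numpy as np

def detect_support(k: np.ndarray, h: int, iteration: int) -> np.ndarray:
    """Boolean mask S^{i+1} of kernel elements exempted from the L1 penalty."""
    vals = np.sort(k.ravel())                    # elements in ascending order
    diffs = np.diff(vals)                        # gaps d_0, d_1, ... between neighbors
    jump = vals[-1] / (2.0 * h * iteration)      # "first significant jump" level
    idx = int(np.argmax(diffs > jump))           # first gap exceeding the level
    if diffs[idx] <= jump:                       # no significant jump: empty support
        return np.zeros_like(k, dtype=bool)
    return k >= vals[idx + 1]                    # elements above the jump form the support
```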

3. Influences on the Accuracy of Kernel Estimation

The characteristics of an image have a substantial effect on the kernel estimation results. Additionally, because simultaneously solving for the blur kernel and the sharp image is not well-posed, the kernels obtained by different blind deconvolution algorithms vary. This is discussed in more detail below, in terms of both the kernel estimation principle and its practical application.
As described in Section 2, the computation first screens salient edges in the image and then uses their gradients to determine the kernel. If the image is flat, has little texture detail, and is dominated by low-frequency information, the salient-edge gradients are insufficient to compute an accurate kernel. Therefore, image regions with abundant texture detail are selected for kernel extraction.
The practical behavior of kernel extraction is shown in Figure 2, where (a) to (d) are farmland, forest, river, and ocean, respectively; these images are flat and contain little high-frequency information, so the kernel is strongly influenced by the image characteristics. The results for the urban images with rich texture features, (e) to (h), are more consistent: the kernels of the several images are similar, with minor variations within a given error level.
In summary, the precision of the kernel estimation is influenced by the image characteristics. Typically, the more texture detail an image contains, the better the kernel can be estimated. Although the kernels extracted from different portions of an image are generally stable, they may vary within a certain error range. Moreover, there is no explicit quantitative relationship between the richness of texture detail in an image and the accuracy of the kernel. Therefore, we use the energy concentration (EC) of the kernel to determine whether a kernel is valid and refine the kernels to obtain the RIKs used to calculate the MTF.
The EC of the kernel indicates the extent to which the energy of the image kernel is aggregated, with a value of
$EC = \dfrac{\sum_p \sum_q D_{pq}^2}{\sum_i \sum_j D_{ij}^2}$,
where ( p , q ) denotes the coordinates of each pixel in the central region of the kernel, ( i , j ) represents the coordinates of all pixels in the kernel, and D is the corresponding grayscale for each pixel.
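Equation (15) amounts to a ratio of squared-energy sums, as in this short sketch; the window size is a parameter (the experiments below use the 1 × 1-pixel central region).

```python
import numpy as np

def energy_concentration(kernel: np.ndarray, win: int = 1) -> float:
    """Equation (15): energy of the central win x win region over total kernel energy."""
    cy, cx = kernel.shape[0] // 2, kernel.shape[1] // 2
    half = win // 2
    center = kernel[cy - half: cy + half + 1, cx - half: cx + half + 1]
    return float((center**2).sum() / (kernel**2).sum())
```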

4. Description of the Process

The flow of the MTF measurement method based on the RIK can be summarized in Figure 3.
The entire method is split into five steps (a sketch of the pipeline follows the list):
  • In the target image, select several sub-images with rich texture details, each 500 × 500 pixels in size;
  • For each sub-image, estimate the kernel using the principles and computational process given in Section 2.2 of this study;
  • Calculate the central pixel energy concentration of each kernel according to Equation (15). If a value is discrete (markedly higher or lower than the rest), the corresponding kernel is deemed unreliable and rejected; the kernels that remain after this refinement are the refined image kernels;
  • Interpolate each refined image kernel to build the PSF and perform the FFT to obtain the 2-D MTF; take the longitudinal and transverse sections to obtain the MTF curves in both directions, and pick the MTFs at the Nyquist frequency;
  • Determine the final MTF by averaging the MTFs in each of the two directions from step 4.
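A minimal end-to-end sketch of these five steps is given below; `estimate_kernel` stands for the ISD procedure of Section 2.2, the helper functions are the sketches given earlier, and the EC outlier rule shown (deviation from the median) is our own illustrative choice.

```python
import numpy as np

def measure_mtf(sub_images, estimate_kernel, ec_tol: float = 0.05):
    kernels = [estimate_kernel(s) for s in sub_images]            # step 2: ISD kernel estimation
    ecs = np.array([energy_concentration(k) for k in kernels])    # step 3: central pixel EC
    keep = np.abs(ecs - np.median(ecs)) < ec_tol                  # reject discrete EC values
    riks = [k for k, ok in zip(kernels, keep) if ok]              # refined image kernels
    mtfs = [mtf_at_nyquist(mtf_from_kernel(k)) for k in riks]     # step 4: MTF at Nyquist
    return np.mean(mtfs, axis=0)                                  # step 5: (MTFy, MTFx) averages
```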

5. Ground Experimental Results and Analysis

To verify its feasibility and accuracy, the MTF measured by the method presented in this paper was compared with the result of the widely used ISO12233 edge method [13].

5.1. Validation Experiment of the MTF Measurement Method

Imaging tests were performed in the lab with a high-performance backside-illuminated CMOS (Complementary Metal Oxide Semiconductor) camera to confirm the viability of our method. As shown in Figure 4b,c, the target scene and the knife edge are captured simultaneously by the camera, and the image quality is similar across the camera's fields of view, providing good consistency. Table 1 shows the MTFs at the Nyquist frequency in the two directions as measured by the ISO12233 edge method, where MTFy and MTFx stand for the longitudinal and transverse MTFs, respectively.
Following the steps described in Section 4, 12 sub-images with a size of 500 × 500 pixels were chosen, and the kernels were then obtained individually. Figure 5 and Figure 6 display the sub-images and the corresponding kernels, respectively. Limited by the experimental conditions, the information contained in the acquired images is not sufficient everywhere for precise calculation of the kernel, so the 12 selected sub-images partially overlap.
According to Equation (15), the 1 × 1-pixel EC of each kernel is computed (shown in Table 2). The values are all around 0.6 and relatively smooth, with no discrete values that fluctuate significantly. As a result, the kernels of the 12 sub-images can be regarded as suitable for the subsequent MTF calculation. Table 3 lists the MTFs at the Nyquist frequency in the two directions obtained from the RIKs; the values are all close to 0.3.
Table 4 displays the longitudinal and transverse average MTFs obtained by the two methods, with errors of 6.83% and 1.56%, respectively, compared with the results of the traditional method, illustrating the viability of our approach.

5.2. The Effect of Image MTF Levels on Measurement Accuracy

Any alteration in the space environment and its internal factors may affect the optical system during the assembly, transportation, launch, and on-orbit operation of satellites, shifting the MTF of the whole system [29]. The measurements may exhibit varying degrees of error at different image MTF levels. Further investigation is therefore required to determine how these variations in MTF levels affect measurement accuracy.
In the lab, various amounts of defocus are applied to the camera to produce groups of images with different MTF levels. The MTF is then evaluated using the ISO12233 edge method, and the results are used as a benchmark. Afterward, we measured the MTF using the method proposed in this paper and obtained the relative error of the measured value at various image MTF levels, as shown in Figure 7.
The figure shows that when the image MTF is at a low level, the absolute error between the measurement results and the reference value is small but the relative error is large, making the measurement results untrustworthy at that point. As the image MTF increases, the relative error quickly converges to an acceptable range, so we consider the results obtained by the proposed method generally accurate when the image MTF is greater than 0.1. As far as we know, the dynamic MTF of current on-orbit remote sensing satellites is typically in the range of 0.07 to 0.2 according to design requirements; within this range, the proposed method can meet the precision requirements of on-orbit MTF measurement.

5.3. The Influence of Image SNR on MTF Measurement Results

As discussed in Section 5.1, our method yields satisfactory results in an ideal environment such as the lab, where the SNR of all images captured by the camera is above 40 dB. However, while space imagers are in orbit, sensor circuitry and exogenous environmental noise typically perturb them, leading to distorted and degraded remote sensing images [30]. It is therefore crucial to investigate how image noise affects the accuracy of MTF measurement.
Noise is added to the 12 sub-images chosen in the previous section, the kernels are re-estimated, and the MTFs are determined. The MTFs obtained from the sub-images at different SNRs are compared with the MTFs in the noise-free case; the relative error of the MTFs at different SNRs is shown in Figure 8.
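For reference, a minimal sketch of how noisy test images at a prescribed SNR can be synthesized; the SNR definition used here (mean signal power over noise power, in dB) is our own assumption.

```python
import numpy as np

def add_noise(image: np.ndarray, snr_db: float, seed=None) -> np.ndarray:
    """Add white Gaussian noise scaled to a target SNR in dB."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(image.astype(float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), image.shape)
    return image.astype(float) + noise
```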
It can be seen that at an image SNR of 15 dB, the relative error of the MTF fluctuates around 15% and can exceed 20% at its maximum; in this case, the result is severely affected by noise. However, as the SNR increases, the relative error converges quickly, with a mean value below 5%. These analyses indicate that the proposed method is robust when the image SNR is above 20 dB.

6. Application of On-Orbit Satellite MTF Assessment

As shown in Figure 9, the knife-edge target of the calibration site was used to test the MTF of the Jilin-1 satellite, and the results are listed in Table 5. This method requires the satellite to image a specific location; owing to the restrictions of the satellite orbit and revisit period, frequent intensive monitoring is difficult to achieve. We therefore estimate the on-orbit MTF using the method presented in this paper. We selected 10 cloud-free sub-images with rich texture detail, each 500 × 500 pixels in size, from imagery acquired by Jilin-1 (illustrated in Figure 10) and obtained their corresponding kernels (illustrated in Figure 11).
The central pixel EC of each kernel is shown in Table 6. Most of the EC values are around 0.45, except for the 4th, which is below 0.3, much lower than the others, and should be treated as a discrete value; the kernel of the fourth sub-image is therefore excluded from the subsequent MTF calculation.
After removing sub-image 4, the MTFs at the Nyquist frequency in the two directions were calculated from the RIKs, as shown in Table 7. The transverse MTF is around 0.15 and the longitudinal MTF around 0.25; the average MTFs for each direction are displayed in Table 8. Compared with the measurement results obtained using the ISO12233 edge method, the errors of the transverse and longitudinal MTF values are 0.14% and 1.38%, respectively. This close agreement further confirms the feasibility and effectiveness of the RIK-based on-orbit MTF measurement method for optical remote sensing sensors introduced in this paper.

7. Conclusions

To address the drawbacks of traditional MTF measurement methods, this paper proposes an on-orbit MTF measurement method for remote sensing imagers based on the RIK rather than on natural feature scenery or artificial knife edges, point sources, and other targets. The RIK is obtained by EC refinement of the image kernel extracted directly from remote sensing images with rich texture details (such as cities and buildings) using the ISD algorithm. The PSF is then built by interpolating the RIK, and the MTF of the optical system is finally calculated by the Fourier transform. Compared with the conventional method, the measured MTFs at the Nyquist frequency have an error of no more than 7% in the lab, and the on-orbit MTF of the Jilin-1 satellite measured from remote sensing images has a maximum error of no more than 2%, which proves the feasibility and validity of our method. At image SNRs of 20 dB to 40 dB, the average relative error of the measured results is below 5%, demonstrating the good robustness of the method. This paper contributes an innovative strategy for on-orbit imaging quality assessment of high-resolution remote sensing satellite space optical payloads. On-orbit optical remote sensing satellites continuously transmit remote sensing images to the ground, and it is easy to find sub-images within these that contain rich details (e.g., buildings, streets). Therefore, in theory, the proposed method makes it possible to perform high-frequency measurements of the on-orbit MTF of remote sensing satellite optical payloads and to achieve real-time monitoring of their imaging quality.

Author Contributions

Conceptualization, Y.W. and X.Z.; methodology, Y.W. and X.Z.; software, Y.W.; validation, Y.W. and X.Z.; formal analysis, Z.Q., L.L. and S.W.; investigation, C.Z.; writing—original draft preparation, Y.W. and X.Z.; writing—review and editing, Z.Q., L.L. and S.W.; supervision, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2019YFE0127000.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Dusseux, P.; Hubert-Moy, L.; Corpetti, T.; Vertès, F. Evaluation of SPOT imagery for the estimation of grassland biomass. Int. J. Appl. Earth Obs. Geoinf. 2015, 38, 72–77. [Google Scholar] [CrossRef]
  2. Kim, D.; Park, M.S.; Park, Y.J.; Kim, W. Geostationary Ocean Color Imager (GOCI) Marine Fog Detection in Combination with Himawari-8 Based on the Decision Tree. Remote Sens. 2020, 12, 149. [Google Scholar] [CrossRef]
  3. Yun, R.; Zhu, C.; Xiao, S. Deformable Faster R-CNN with Aggregating Multi-Layer Features for Partially Occluded Object Detection in Optical Remote Sensing Images. Remote Sens. 2018, 10, 1470. [Google Scholar] [CrossRef]
  4. Chathura, W.; Simon, J.; Karin, R.; Luke, W. Development of a Multi-Spatial Resolution Approach to the Surveillance of Active Fire Lines Using Himawari-8. Remote Sens. 2016, 8, 932. [Google Scholar] [CrossRef]
  5. Masaoka, K. Practical edge-based modulation transfer function measurement. Opt. Express 2019, 27, 1345–1352. [Google Scholar] [CrossRef]
  6. Fang, Y.C.; Tzeng, Y.F.; Wu, K.Y.; Tsay, H.L.; Lin, P.M. Measurement and analysis of modulation transfer function of digital image sensors. Microsyst. Technol. 2022, 28, 137–142. [Google Scholar] [CrossRef]
  7. Masaoka, K. Edge-based modulation transfer function measurement method using a variable oversampling ratio. Opt. Express 2021, 29, 37628–37638. [Google Scholar] [CrossRef] [PubMed]
  8. Cui, L.Y.; Xue, B.D.; Cao, X.G.; Dong, J.K.; Wang, J.N. Generalized atmospheric turbulence MTF for wave propagating through non-Kolmogorov turbulence. Opt. Express 2010, 18, 21269–21283. [Google Scholar] [CrossRef] [PubMed]
  9. Oh, E.; Choi, J.K. GOCI image enhancement using an MTF compensation technique for coastal water applications. Opt. Express 2014, 22, 26908–26918. [Google Scholar] [CrossRef] [PubMed]
  10. Vettenburg, T.; Bustin, N.; Harvey, A.R. Fidelity optimization for aberration-tolerant hybrid imaging systems. Opt. Express 2010, 18, 9220–9228. [Google Scholar] [CrossRef] [PubMed]
  11. Klapp, I.; Mendlovic, D. Improvement of matrix condition of Hybrid, space variant optics by the means of Parallel Optics design. Opt. Express 2009, 17, 11673–11689. [Google Scholar] [CrossRef] [PubMed]
  12. Viallefont-Robinet, F.; Helder, D.; Fraisse, R.; Newbury, A.; Bergh, F.v.d.; Lee, D.H.; Saunier, S. Comparison of MTF measurements using edge method: Towards reference data set. Opt. Express 2018, 26, 33625–33648. [Google Scholar] [CrossRef] [PubMed]
  13. Hwang, H.; Choi, Y.W.; Kwak, S.; Kim, M.; Park, W. MTF assessment of high resolution satellite images using ISO 12233 slanted-edge method. In Proceedings of the Image and Signal Processing for Remote Sensing XIV, Cardiff, UK, 15–18 September 2008; SPIE: Bellingham, WA, USA, 2008; Volume 7109, pp. 34–42. [Google Scholar] [CrossRef]
  14. Xu, M.; Cong, M.; Li, H. Research of on-orbit MTF measurement for the satellite sensors. In Proceedings of the Remote Sensing of the Environment: 18th National Symposium on Remote Sensing of China, Zhangjiajie, China, 20–23 October 2012; SPIE: Bellingham, WA, USA, 2012; Volume 9158, pp. 45–52. [Google Scholar] [CrossRef]
  15. Horiuchi, S.; Yoshida, S.; Yamamoto, M. Simulation of modulation transfer function using a rendering method. Opt. Express 2013, 21, 7373–7383. [Google Scholar] [CrossRef] [PubMed]
  16. Cheng, Y.; Yi, H.; Liu, X. Lunar-edge based on-orbit modulation transfer function (MTF) measurement. In Proceedings of the AOPC 2017: Space Optics and Earth Imaging and Space Navigation, Beijing, China, 4–6 June 2017; SPIE: Bellingham, WA, USA, 2017; Volume 10463, pp. 432–439. [Google Scholar] [CrossRef]
  17. Masaoka, K.; Yamashita, T.; Nishida, Y.; Sugawara, M. Modified slanted-edge method and multidirectional modulation transfer function estimation. Opt. Express 2014, 22, 6040–6046. [Google Scholar] [CrossRef] [PubMed]
  18. Choi, T. IKONOS satellite on orbit modulation transfer function (MTF) measurement using edge and pulse method. Ph.D. Thesis, Electrical Engineering Department, South Dakota State University, Brookings, SD, USA, 2002. [Google Scholar]
  19. Kohm, K. Modulation transfer function measurement method and results for the Orbview-3 high resolution imaging satellite. In Proceedings of the ISPRS, Istanbul, Turkey, 12–23 July 2004; Volume 35, pp. 12–23. [Google Scholar]
  20. Storey, J.C. Landsat 7 on-orbit modulation transfer function estimation. In Sensors, Systems, and Next-Generation Satellites V; SPIE: Bellingham, WA, USA, 2001; Volume 4540, pp. 50–61. [Google Scholar] [CrossRef]
  21. Viallefont-Robinet, F.; Cansot, E. SPOT5 MTF measurement using biresolution images. In Proceedings of the Sensors, Systems, and Next-Generation Satellites VIII, Ottawa, ON, Canada, 13–16 September 2004; SPIE: Bellingham, WA, USA, 2004; Volume 5570, pp. 245–255. [Google Scholar] [CrossRef]
  22. Han, L.; Gao, K.; Dou, Z.; Wang, H.; Fu, X. On-orbit MTF estimation for GF-4 satellite using spatial multisampling on a new target. IEEE Geosci. Remote Sens. Lett. 2019, 17, 17–21. [Google Scholar] [CrossRef]
  23. Nelson, N.R.; Barry, P.S. Measurement of Hyperion MTF from on-orbit scenes. In Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium (Cat. No. 01CH37217), Sydney, Australia, 9–13 July 2001; Volume 7, pp. 2967–2969. [Google Scholar] [CrossRef]
  24. Pan, L.; Hartley, R.; Liu, M.; Dai, Y. Phase-only image based kernel estimation for single image blind deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 20 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 6034–6043. [Google Scholar] [CrossRef]
  25. Pan, J.S.; Hu, Z.; Su, Z.X.; Yang, M.H. L0-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 342–355. [Google Scholar] [CrossRef] [PubMed]
  26. Xu, L.; Jia, J.Y. Two-phase kernel estimation for robust motion deblurring. In Proceedings of the Computer Vision—ECCV 2010, Crete, Greece, 5–11 September 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 157–170. [Google Scholar] [CrossRef]
  27. Joshi, N.; Szeliski, R.; Kriegman, D.J. PSF estimation using sharp edge prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1–8. [Google Scholar] [CrossRef]
  28. Wang, Y.; Yin, W. Sparse Signal Reconstruction via Iterative Support Detection. SIAM J. Imaging Sci. 2010, 3, 462–491. [Google Scholar] [CrossRef]
  29. Viallefont-Robinet, F. Edge method for on-orbit defocus assessment. Opt. Express 2010, 18, 20845–20851. [Google Scholar] [CrossRef] [PubMed]
  30. Zehtabian, A.; Nazari, A.; Ghassemian, H.; Gribaudo, M. Adaptive restoration of multispectral datasets used for SVM classification. Eur. J. Remote Sens. 2015, 48, 183–200. [Google Scholar] [CrossRef]
Figure 1. Model of optical remote sensing camera imaging procedure.
Figure 2. Remote sensing photos from the Jilin-1 and the corresponding kernels: (a) Farmland; (b) forests; (c) river; (d) sea; (e) gymnasium; (f) streets; (g) railways; (h) buildings.
Figure 3. The whole MTF acquisition process.
Figure 4. Imaging experiments: (a) Diagram of the experimental setup; (b) the target scene; (c) the knife edge.
Figure 5. Selected 12 sub-images.
Figure 6. Kernels corresponding to each sub-image.
Figure 7. Influence of image MTF level on measurement.
Figure 8. Impact of image noise on MTF measurement results.
Figure 9. Testing image captured by Jilin-1 satellite: (a) Calibration site; (b) knife-edge target.
Figure 10. The 10 cloud-free sub-images with rich texture detail picked from the image acquired by Jilin-1 with a size of 500 × 500 pixels.
Figure 11. Kernels of the remote sensing sub-images (10 cloud-free sub-images).
Table 1. MTFs measured by the ISO12233 edge method.

        Position   Value   Average
MTFy    2          0.275   0.2930
        3          0.311
MTFx    1          0.293   0.2945
        4          0.296
Table 2. The 1 × 1-pixel EC of each sub-image kernel.

No.   1        2        3        4        5        6
EC    0.6017   0.6146   0.6040   0.6091   0.6071   0.5663
No.   7        8        9        10       11       12
EC    0.5820   0.5923   0.5903   0.5813   0.5903   0.6018
Table 3. MTFs obtained from the RIK of each sub-image.

No.                  MTFy      MTFx
1                    0.30654   0.30689
2                    0.29654   0.30514
3                    0.28076   0.27833
4                    0.29357   0.28511
5                    0.28185   0.28691
6                    0.23811   0.32868
7                    0.24301   0.32773
8                    0.25939   0.30106
9                    0.24752   0.31009
10                   0.23567   0.27820
11                   0.28195   0.28091
12                   0.27849   0.30007
Standard Deviation   2.45%     1.77%
Table 4. Average MTF of the two methods at the Nyquist frequency.

       ISO12233 Edge   Our Method   Error
MTFy   0.2930          0.2703       6.83%
MTFx   0.2945          0.2991       1.56%
Table 5. MTF of the Jilin-1 satellite measured by the ISO12233 edge method.

Plan      MTFy-1   MTFy-2   MTFx-1   MTFx-2
57,648    0.240    0.234    0.134    0.168
57,676    0.216    0.243    0.143    0.147
57,784    0.244    0.253    0.132    0.153
Average   0.2383 (MTFy)     0.1462 (MTFx)
Table 6. The 1 × 1-pixel EC of remote sensing sub-image kernels.

No.   1        2        3        4        5
EC    0.3993   0.4480   0.4226   0.2814   0.3993
No.   6        7        8        9        10
EC    0.4491   0.4644   0.4721   0.4721   0.4634
Table 7. On-orbit MTFs obtained from the RIK of each sub-image.

No.                  MTFy      MTFx
1                    0.26577   0.15738
2                    0.22907   0.13844
3                    0.21845   0.16020
5                    0.19790   0.14565
6                    0.28322   0.14446
7                    0.20729   0.15260
8                    0.26953   0.13841
9                    0.28901   0.14984
10                   0.15477   0.12696
Standard Deviation   4.51%     1.04%
Table 8. On-orbit average MTF of the two methods.

       ISO12233 Edge   Our Method   Error
MTFy   0.2383          0.2350       1.38%
MTFx   0.1462          0.1460       0.14%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
