Article
Peer-Review Record

Analysis, Simulations, and Experiments for Far-Field Fourier Ptychography Imaging Using Active Coherent Synthetic-Aperture

Appl. Sci. 2022, 12(4), 2197; https://doi.org/10.3390/app12042197
by Mingyang Yang 1,2, Xuewu Fan 1, Yuming Wang 1,2 and Hui Zhao 1,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Submission received: 13 January 2022 / Revised: 3 February 2022 / Accepted: 17 February 2022 / Published: 20 February 2022

Round 1

Reviewer 1 Report

Review of

Analysis, simulations, and experiments for far-field Fourier ptychography imaging using active coherent synthetic-aperture

 

Authors

Mingyang Yang , Xuewu Fan , Yuming Wang and Hui Zhao

 

REVIEWER COMMENTS

 

The paper presents some interesting results on the use of Fourier ptychography with camera scanning to improve resolution. However, this reviewer could not fully understand the methodology used, since the paper lacks clarity. The figures, the text, and the equations are not consistent with each other. The symbols used in the figures differ from the symbols used in the text, which makes the paper very difficult to follow. Many parts of the setup, such as apertures, distances, or lenses, are referred to by different names throughout the paper, making it difficult to follow!

For this reason, I suggest publication only after the Authors have applied some modifications to make the paper clearer.

In the following I present my comments on specific sections of the manuscript:

Abstract

The authors claim that they "advance the application of FP to far-field imaging" using camera scanning to improve spatial resolution. This improvement is obtained as follows:

  • 1) deriving a far-field imaging model that compensates for a theoretical gap in previous research

At this point of the paper it is not clear what this gap is. Is it a gap in deriving the far-field model? Has no far-field model been derived before?

 

  • 2) building an experimental setup to demonstrate the relationship between the spectral overlap ratio and the reconstructed high-resolution image. The simulations and measurements show that a ratio >50% has a good effect on reconstruction

 

  • 3) using a partition reconstruction method to avoid inconsistency of aberrations.

Here it is not clear what "partition reconstruction" is, or what is meant by "inconsistency of aberrations"!
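For reference, the >50% overlap ratio quoted in point 2 follows the standard circular-aperture geometry used in the FP literature: the ratio is the intersection area of two equal pupil circles divided by the area of one circle. A minimal sketch of that geometry (the function name and numbers are illustrative, not taken from the manuscript):

```python
import math

def overlap_ratio(r, s):
    """Fractional overlap of two equal circles of radius r whose centres are s apart."""
    if s >= 2 * r:
        return 0.0
    # Area of the lens-shaped intersection of two equal circles,
    # normalised by the area of one circle (pi * r^2).
    lens = 2 * r * r * math.acos(s / (2 * r)) - (s / 2) * math.sqrt(4 * r * r - s * s)
    return lens / (math.pi * r * r)

# A scan step equal to one pupil radius gives only ~39% overlap,
# below the 50% threshold quoted in the abstract; the step must be smaller.
print(round(overlap_ratio(1.0, 1.0), 3))  # → 0.391
```

This makes concrete why the step size of the camera scan, not just the number of positions, controls whether the 50% criterion is met.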

 

The Authors claim they obtained a 4x resolution gain on group 25 (40 um line width) of a specific target (GBA1), and that their study extends the working distance of far-field FP imaging while improving spatial resolution.

 

In this last statement it is not clear how far the working distance is extended, and what sort of improvement is obtained with respect to state-of-the-art methods.

 

2. Analysis of the macroscopic FP principle

 

The following sentence is not correct English: "The coherent light source and camera sensor were located on opposite side of the target".

Perhaps the Authors want to say that the light source and the camera sensor were located on opposite sides with respect to the target?

Figure 1 is not clear enough! I would recommend the Authors to consider the following points:

- First of all, the scanning path of the camera is not visible, and this is the most important feature of this study, so it should be highlighted!

- Second of all, the setup is described only in the caption of the figure, and that description is not sufficient! For example, why is there a filter? What type of light source is it? From the figure it looks like a point source, but no information is given. The Authors only specify that it is a point source later in the paper, but the description of a figure should come before the figure is displayed, not after! Otherwise the paper is difficult and time-consuming to follow!

 

2.1 Far-field Fraunhofer approximation

On line 118 the Authors refer to Fig. 1 and to an object, but in Fig. 1 the object is not labeled as such! The text and figures should use the same nomenclature!

 

On lines 120 and 121 the Authors refer to Z as the distance between the "object" and the "pupil plane", but they do not specify which pupil plane: the entrance pupil or the exit pupil? Furthermore, the pupil planes and the object are not indicated in Fig. 1(a) or (b).

 

Then, on the same lines 120 and 121, they define d as the diameter of the "entrance pupil aperture of the imaging lens", but they never specify where this pupil is, or what is meant by the imaging lens! Furthermore, in Fig. 1 the two lenses are called the focusing lens and the photographic lens, but in the text the second lens is called something else (see line 129). The Authors should use the same names in the figures and in the text.

 

The distance Z, introduced in Eqs. (1) and (2) and in the text on line 120, is not indicated in Figure 1! In this figure we have Z1, Z2, and Z3, but no Z, which is very confusing. Furthermore, the resolution of Figure 1 is very poor, and the symbols are hardly readable.

 

I would strongly suggest updating Fig. 1 so that it contains the same information as the text and has readable symbols, and re-writing this entire paragraph in a clearer, more understandable way.
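For context, the quantities under discussion (the distance Z and the pupil diameter d) set the diffraction-limited resolution through the usual Rayleigh criterion, and camera scanning enlarges the effective synthetic aperture. A hedged sketch with made-up numbers (none of these values come from the manuscript):

```python
def rayleigh_resolution(wavelength, Z, d):
    """Diffraction-limited resolution at distance Z for aperture diameter d (Rayleigh)."""
    return 1.22 * wavelength * Z / d

# Illustrative values only: HeNe wavelength, 1 m working distance, 10 mm pupil.
native = rayleigh_resolution(632.8e-9, 1.0, 0.01)
# Scanning the camera to synthesise a 4x larger aperture gives 4x finer resolution.
synthetic = rayleigh_resolution(632.8e-9, 1.0, 0.04)
print(round(native / synthetic, 6))  # → 4.0
```

This is exactly why the paper should state Z and d unambiguously: the claimed 4x gain follows directly from the ratio of synthetic to native aperture diameter.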

 

2.2 Forward model

The rest of this section is not written in a clear way, and I would suggest re-writing it. Things are referred to by different names throughout the text, making it very hard to follow! For example, on line 135 the Authors refer to h(u,v:x,y) as the "amplitude field generated by the point source", and to (u,v) as the amplitude distribution of the point source. Then on line 138 h(u,v:x,y) is referred to as a pulse function.

Then on line 143 the Authors use a sentence that is not correct English:

The field distribution after the object passes through the lens is given as

It is hard to understand what the Authors mean, since an object cannot pass through a lens! Maybe they mean that the light transmitted through the "target", which they call the "object", passes through the lens...

 

On line 144 the symbol d is used as the diameter of a lens, while before it was used to indicate the diameter of the pupil. Does the pupil coincide with the lens? This should be described at the beginning.

 

All these small details make the paper difficult to follow.

 

I would suggest describing the setup and the method at the beginning of the paper, in section 2, where the Authors show the main figure (Fig. 1). In this section the Authors should describe everything, i.e., what type of source is used, what the filter is for, what the "focusing lens" is for, and why they call it a focusing lens. They should also use the same symbols as in the text, like Z or d, and indicate the pupils that are then discussed in the text.

If this is done then in section 2.2 it will not be necessary to describe all the symbols again and the section will become much clearer.

 

In this section the Authors use the following symbols:

Ui --> intensity distribution of the point light source (line 133)

Ul --> field distribution on the near-front surface of the focusing lens (lines 140-142)

Ul' --> field distribution on the near-back surface (line 142)

Up --> field distribution after the object passes through the lens (line 143); as stated before, this is not expressed in correct English. Furthermore, this symbol is then defined on line 169 as "The amplitude distribution through an object is defined as Up"... very confusing!

Ug --> NOT DEFINED

Ui' --> the light field distribution on the camera sensor (line 167). This is confusing, since with Ui the Authors previously indicated the distribution of the light source!

Uimg --> the ideal image of the geometrical optics (line 173)

 

This section is full of formulas and it becomes very difficult to follow if each symbol is not precisely defined before being used!

 

4 Experiments

4.2 Quality control and error correction

 

The Authors say there are two main components of the noise: 1) dark current and 2) photon noise, but the effect of the photon noise is not discussed. Furthermore, the Authors do not say why they consider the read noise to be negligible!

In order to understand the contribution of each source of noise, one needs to know the exposure time, the full-well capacity, and the quantum efficiency at the considered wavelength, but none of this information is given.
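To illustrate the budget being asked for, here is a back-of-the-envelope comparison in electrons, assuming entirely hypothetical sensor parameters (none of these values come from the manuscript):

```python
import math

def noise_budget(photon_rate_e, t_exp, dark_current_e, read_noise_e):
    """Per-pixel noise terms in electrons for one exposure.

    photon_rate_e:  detected photo-electrons per second (flux * QE already applied)
    dark_current_e: dark-current electrons per second
    read_noise_e:   RMS read noise in electrons
    """
    signal = photon_rate_e * t_exp
    shot = math.sqrt(signal)                  # photon (shot) noise
    dark = math.sqrt(dark_current_e * t_exp)  # dark-current shot noise
    total = math.sqrt(shot**2 + dark**2 + read_noise_e**2)
    return signal, shot, dark, total

# Hypothetical: 1e5 e-/s photo-current, 10 ms exposure, 1 e-/s dark current, 2 e- read noise.
signal, shot, dark, total = noise_budget(1e5, 0.01, 1.0, 2.0)
```

With these (made-up) numbers the shot noise dominates and neglecting read noise would be defensible, but the Authors can only make that argument if they report such parameters.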

 

Furthermore, the figure has no labels on the x- and y-axes!

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Reviewer 2 Report

The authors described research on a Fourier Ptychography method adapted for "long-distance imaging".

The work is very interesting and merits publication.

In my opinion, there are, however, many concerns that should eventually be taken into account, or that at least require a double-check:

  1. Why do we need "long-distance imaging"? Is it something we want, or something we are striving to fight? From the paper by Holloway et al. it seems to be something one wants. Here, from my understanding, it is the latter. What is the point of reducing the observation distance with a lens if you want "long-range distance"?
  2. The "featured application" should be one sentence, not something bigger than the abstract. I can say that is also very difficult to read and comprehend. You should reduce its length considerably and make it way more readable and understandable.
  3. In the abstract you mention "group 25", the name you gave to your test pattern. How could someone understand this before reading the paper?
  4. In line 43 the paper starts completely wrong: Fourier Ptychography (a technique) is not an extension of the ptychography iterative engine (a particular algorithm in the wild forest of ptychography algorithms).
  5. Similarly, in line 47 you say that FP is a phase retrieval algorithm. I can understand what you are saying, and I can somehow agree (if I consider ptychography to belong to the phase retrieval methods), but it is an unfortunate sentence.
  6. Line 55: from my understanding, the scanning aperture is not the key to 3D imaging; it is a different computational method, one that takes the 3D structure into account, that allows it.
  7. I also don't completely agree with the separation between angular illumination and sequential scanning: in the Fourier plane it's the same thing, as you effectively scan different portions of the space with the shifting pupil function.
  8. I also don't agree that the overlap factor has not been formally studied. Check some papers from 2013, but also the follow-up works of Zhang.
  9. Line 82: well, saying that Fraunhofer propagation has not been deeply examined in theory is a bit too much... I would rephrase this paper's objective.
  10. Regarding the forward model part, I would follow a top-down approach, describing first the general view, maybe with an operator notation, and only then the details. Do we really need all those equations written like that? I think we have just:
    • an illumination function at a plane. If you want to describe it rigorously, call it P and define it in a second equation. But it is just a low-frequency complex-valued aperture function multiplied by an exponential phase factor.
    • the interaction between the sample and the illumination O*P, an elementwise product.
    • then we have a propagation to the objective lens; we also have to apply the quadratic phase factor of the second lens to the wave calculated at this plane.
    • Finally, the Fourier transform gives the intensity of a natural image on the detector plane.
  11. Actually, isn't the entire process just what happens in normal Fourier Ptychography with a microscope lens? The only difference is that you are scanning the scattered field with the detector, with some overlap, virtually obtaining a high numerical aperture detector image.
  12. You should add colour bars and scale bars to each reconstructed image, at least in the real data.
  13. You can add a figure that shows how the dataset appears for many acquisitions. Or is it figure 11?
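The reading in point 11, standard FP where the pupil shift is realised by moving the detector, can be sketched as a forward model in a few lines. This is a schematic illustration under the reviewer's simplified picture (it drops the quadratic phase factors mentioned in point 10, and all names are made up, not the Authors' actual model):

```python
import numpy as np

def fp_forward(obj, pupil_shift, pupil_radius):
    """One FP acquisition: far field of the object, cropped by a shifted
    circular pupil (the scanned detector position), back to an intensity image."""
    n = obj.shape[0]
    far_field = np.fft.fftshift(np.fft.fft2(obj))
    yy, xx = np.mgrid[:n, :n]
    cy, cx = n // 2 + pupil_shift[0], n // 2 + pupil_shift[1]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= pupil_radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(far_field * mask))) ** 2

# Overlapping acquisitions: shifting the pupil by less than its diameter
# leaves shared spectral content between neighbouring low-resolution images.
rng = np.random.default_rng(0)
obj = rng.random((64, 64))
img_centre = fp_forward(obj, (0, 0), 16)
img_shift = fp_forward(obj, (10, 0), 16)  # shift < diameter -> overlap
```

A handy sanity check: with a pupil large enough to cover the whole grid, the output reduces to the plain intensity of the object.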

 

 

Author Response

Please see the attachment.

Author Response File: Author Response.pdf
