1. Introduction
As a promising and thriving technique in modern ultrafast optical imaging systems, the time-stretch (TS) technique exploits the resolvability of ultrafast broadband laser pulses in both the spectral and temporal domains: it encodes spectral information into the temporal domain through a chromatic (dispersive) medium, enabling continuous data acquisition at an ultra-high frame rate. The TS technique has been comprehensively and extensively utilized in a variety of scientific, industrial, and biomedical applications, including optical coherence tomography (OCT) [1,2,3,4,5], observation of fast dynamic phenomena [6], biomedical tissue imaging with TS microscopy [7,8,9], cell imaging and blood screening [10,11,12], and compressive sensing (CS) optical imaging systems [13,14,15,16,17,18].
Serial time-encoded amplified microscopy (STEAM) is a typical TS imaging approach, with an unprecedented imaging speed of tens of millions of frames per second [19,20,21,22], which overcomes the bottleneck of the tradeoff between imaging speed and sensitivity. STEAM mainly comprises two procedures. The first procedure is called wavelength/spectrum-to-time conversion, also called TS, dispersive Fourier transformation (DFT), or wavelength/spectrum-to-time one-to-one linear mapping, which results from the application of temporal dispersive devices, such as long-distance single-mode fibers or dispersion-compensating fibers (DCF) [23].
Figure 1a shows the one-to-one linear mapping between wavelength and time. The other procedure is called space-to-wavelength/spectrum conversion, also known as the space-to-wavelength/spectrum one-to-one linear mapping, which is induced by the utilization of spatial dispersive devices, such as gratings or virtually imaged phased arrays (VIPAs) [24,25].
Figure 1b shows the one-to-one linear mapping between wavelength and space. From the above two procedures, a linear mapping between space (imaging information) and time (1D time serial information) is achieved.
Figure 1c shows the one-to-one linear mapping between time and space via the wavelength. By analyzing the 1D time serial information, the imaging information of the original image can be reconstructed.
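The composition of the two linear mappings can be sketched numerically. The following snippet is purely illustrative: the temporal dispersion value matches the magnitude used later in the experimental section, while the spatial dispersion coefficient is an assumed number chosen only to give a millimeter-scale line field of view. It verifies that a linear wavelength-to-time map composed with a linear wavelength-to-space map yields a linear time-to-space map.

```python
import numpy as np

# Illustrative sketch of the STEAM mapping chain (assumed numbers):
# wavelength -> time via a linear group delay (DCF), and
# wavelength -> space via a grating's linear spatial dispersion.
lam0 = 1550e-9                                        # center wavelength (m)
lam = np.linspace(lam0 - 7.5e-9, lam0 + 7.5e-9, 321)  # 15 nm bandwidth

D = -0.48e-9 / 1e-9            # temporal dispersion: -0.48 ns per nm, in s/m
t = D * (lam - lam0)           # wavelength-to-time one-to-one linear mapping

k = 0.16e-3 / 1e-9             # spatial dispersion: 0.16 mm per nm (assumed)
x = k * (lam - lam0)           # wavelength-to-space one-to-one linear mapping

# Composition: space follows time linearly, so the 1D time trace is the
# image line; reconstruction is just a linear rescaling of the time axis.
slope = k / D
print(np.allclose(x, slope * t))  # True: the composed mapping is linear
```

With these numbers the 15 nm bandwidth stretches over about 7.2 ns in time and 2.4 mm in space, consistent in scale with the pulse widths and field of view reported later.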
As a result, the basic principle of STEAM is to map the spatial information into the spectrum of the illuminating optical pulse, and then map the spectrally encoded information from the frequency domain to the time domain through a temporal dispersive medium, forming a one-dimensional time-serial data stream. The key advantage of this approach is that it overcomes the speed limit of traditional imaging methods to achieve ultrafast imaging. Recent research advances have shown that STEAM-based ultrafast imaging systems are constantly being optimized to improve image quality, enhance system stability, and expand their applications across a variety of fields. Owing to its ultrafast, continuous-throughput operation, STEAM has been successfully employed in ultrafast optical imaging [26,27], deep learning and classification [18], biomedical imaging [8,9,10,11,12,28], and CS optical imaging [13,14,15,16,17,18,29].
Despite its ultrafast imaging capability, STEAM-based ultrafast imaging systems still face challenges in practical applications. For example, efficient signal processing algorithms are needed to handle the large amount of data generated during ultrafast imaging. There are two main approaches to compressing and mitigating this data. The first approach is the CS method [13,14,15,16,17,18,28,29,30,31,32]. CS is an emerging information processing technology that takes advantage of the sparsity of imaging information to acquire and process high-resolution information while collecting far fewer measurements than required by the Nyquist sampling theorem [28,29]. The core idea of CS is to reconstruct the original information, based on its sparsity, from a number of sampling points far below the Nyquist rate. However, CS needs a corresponding systematic configuration and complicated reconstruction algorithms, which increase the cost and complexity of the ultrafast imaging system [30,31,32]. The other approach is anamorphic transformation (AT) [33,34], also known as anamorphic stretch transform (AST) or warped stretch transform (WST). AT is an advanced optical data compression technique that realizes real-time image compression by nonlinear time-domain stretching [35]. The AT technique is a variant of TS imaging that achieves real-time optical compression of images by deliberately applying highly nonlinear temporal dispersion in place of the linear frequency-to-time mapping, while maintaining fast imaging speeds. Unlike CS, AT requires neither feature detection nor iterative reconstruction, which makes it a strong candidate for dealing with large amounts of data. Furthermore, the nonlinear mapping from wavelength/spectrum to time can improve the group delay-related resolution, revealing more details of the target images.
Thus, in this work, we propose an ultrafast optical imaging system with AT based on the STEAM structure. AT is implemented using a dispersive device with a designed nonlinear group delay profile, such as a chirped fiber Bragg grating (CFBG). Our proposal improves the group delay-related resolution while maintaining the same line-scan speed, laying a solid foundation for high-speed, high-throughput, and data-efficient imaging.
2. Methods
Thanks to the utilization of dispersive devices with nonlinear group delay, AT achieves a nonlinear mapping between wavelength and time. The nonlinear nature of this transformation provides new possibilities for data compression in both amplitude and phase operations. In imaging data compression, AT realizes compression by increasing the spatial coherence of images, and this compression is achieved by mathematical reconstruction of the image rather than by modifying the sampling process. AT technology has a wide range of potential applications in areas such as industrial and biomedical imaging, providing high-resolution, high-speed imaging while reducing the required data storage and processing through optical data compression [36,37]. Hence, the development of AT technology provides a new solution for high-speed, high-throughput imaging with the added benefit of optical data compression.
Figure 2 shows the schematic of AT imaging compared with normal TS imaging. A spatial dispersive device is employed to establish the one-to-one mapping between space and the spectrum/wavelength of the broadband optical pulse. The spectrum of the broadband optical pulse is spread into a rainbow to illuminate the image, so the image information is encoded into the spectrum by the spatial dispersive device. In normal TS imaging, a temporal dispersive device with a linear group delay profile, such as DCF, is applied to achieve the linear mapping between time and spectrum/wavelength. Hence, a uniform one-to-one-to-one mapping among space, spectrum, and time is achieved. Figure 2a illustrates the uniform spectrum of the rainbow pulse illuminating the image. Figure 2c describes the one-to-one linear mapping between time and spectrum/wavelength when the broadband optical pulse illuminates the sample. The drawback of this approach is that it oversamples the sparse marginal area of the image with redundant data while undersampling the information-dense area.
In contrast, a temporal dispersive device with a nonlinear group delay profile, such as a CFBG, is utilized to perform the nonlinear mapping between time and spectrum/wavelength. AT thereby reshapes the data by nonlinear stretching in the temporal domain, and its output has properties that facilitate data compression and data analysis. Figure 2b illustrates the unevenly distributed spectrum of the rainbow pulse illuminating the image. Figure 2d depicts the one-to-one nonlinear mapping between time and spectrum/wavelength when the broadband optical pulse illuminates the sample; it indicates that the dense information region is designed with a low group delay, so more wavelength information is employed to describe that region. As a result, the temporal dispersive device is designed with a group delay profile matched to the sparsity of the image: in the AT imaging system, the information-dense area receives high-resolution sampling while the sparse marginal area tolerates low-resolution sampling.
Figure 3 depicts the implementation of AT in an ultrafast optical imaging system using a CFBG and the results of linear-stretched and nonlinear-stretched optical pulses in the temporal domain and in the frequency domain. The CFBG with a nonlinear group delay profile is employed to obtain the nonlinear one-to-one mapping between time and spectrum/wavelength. The broadband femtosecond optical pulses produced from a mode-locked laser (MLL) are linearly stretched in the temporal domain after passing through the DCF. Then, the optical pulses go through the circulator 1 from port 1 to port 2 and then reflect back to port 3 of the circulator 1 via the CFBG, where the optical pulses are nonlinear-stretched. The nonlinear-stretched optical pulses then go through the circulator 2 from port 1 to port 2 and then reach the target image after passing through the collimator. Then, the optical pulses are reflected by the target image and are returned to the circulator 2 via the collimator from port 2 to port 3. Finally, the returned optical pulses are detected by the photodetector (PD).
Figure 3b describes a linear-stretched optical pulse, obtained with the DCF, at port 1 of the circulator 1 in the temporal domain, and Figure 3d shows a nonlinear-stretched optical pulse, obtained with the CFBG, at port 3 of the circulator 1 in the temporal domain. From Figure 3b,d, the full width at half maximum (FWHM) of the optical pulses in the temporal domain is around 7 ns; the nonlinear-stretched pulse in Figure 3d has a low group delay over around 7 ns~10 ns and a high group delay over around 10 ns~14 ns, while the linear-stretched optical pulse in Figure 3b has a uniform group delay. Figure 3c,e show the fast Fourier transforms of the pulses in Figure 3b,d in the frequency domain, respectively. From Figure 3c,e, the spectra of the linear and nonlinear time-stretched pulses differ slightly in the low-frequency region.
In the proposed imaging system, the temporal resolution is determined by the repetition rate of the MLL, while the spatial resolution is a combination of several limits: the spatial-dispersion-limited resolution (set by the diffraction grating), the diffraction-limited resolution (set by the lens), the dispersion-induced time-stretch resolution obtained through the stationary phase approximation (SPA), also known as the group delay-related spatial resolution, and the resolution limited by the temporal resolution of the digitizer [22].
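Of these contributions, the digitizer-limited term can be estimated directly from the sampling rate and temporal dispersion quoted in the experimental section below; the spatial dispersion coefficient in the sketch is a hypothetical value used only to convert a wavelength step into a position step.

```python
# Back-of-the-envelope estimate of the digitizer-limited spatial resolution.
# Sampling rate and temporal dispersion follow the experimental section;
# the spatial dispersion coefficient k is a hypothetical value.
sample_rate = 20e9              # digitizer sampling rate, 20 GS/s
D_ns_per_nm = 0.48              # |temporal dispersion| of the DCF, ns per nm
dt_ns = 1e9 / sample_rate       # 0.05 ns between consecutive samples
dlam_nm = dt_ns / D_ns_per_nm   # wavelength step per time sample (~0.104 nm)

k_mm_per_nm = 0.16              # hypothetical spatial dispersion of the grating
dx_um = dlam_nm * k_mm_per_nm * 1e3  # digitizer-limited spatial step (~16.7 um)
print(round(dlam_nm, 3), round(dx_um, 1))
```

The overall spatial resolution is commonly taken as the largest of the individual limits, so a digitizer-limited step well below the measured resolution simply indicates that another term dominates.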
The optical field at port 1 of the circulator 1 is expressed as [38]:

E1(ω) = G(ω) exp(jΦ2 ω²/2),

where Φ2 is the dispersion of the DCF, ω is the optical angular frequency, and G(ω) is the optical spectrum of the pulse. Without considering the CFBG, the optical field after the target image can be shown as:

E2(ω) = G(ω) exp(jΦ2 ω²/2) M(ω),

where M(ω) is the imaging information encoded into the spectrum of the optical pulse. When considering the CFBG, the CFBG can be regarded as a phase filter in the spectral domain. The optical field before the PD is expressed by:

E3(ω) = G(ω) exp(jΦ2 ω²/2) M(ω) exp(jφ(ω)),

where E3(ω) is the Fourier transform of the temporal field arriving at the PD. The phase response of the CFBG, φ(ω), is related to the group delay profile through τ(ω) = −dφ(ω)/dω. And the optical pulse e(t) in the temporal domain is calculated by the inverse Fourier transform of E3(ω).
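The field model above can be sanity-checked numerically. The sketch below uses normalized units and made-up parameter values (a Gaussian source spectrum, a quadratic DCF phase, a cubic CFBG phase giving a nonlinear group delay, and a bar-pattern M(ω)); it builds E1, E2, and E3 and recovers the temporal pulse by an inverse FFT.

```python
import numpy as np

# Numerical sketch of the field model above (assumed, normalized parameters).
N = 4096
omega = np.linspace(-1.0, 1.0, N)                # normalized angular frequency
G = np.exp(-omega**2 / (2 * 0.3**2))             # Gaussian source spectrum G(w)

phi2 = 2000.0                                    # DCF dispersion (quadratic phase)
E1 = G * np.exp(1j * phi2 * omega**2 / 2)        # field at port 1 of circulator 1

M = 0.5 * (1 + np.sign(np.sin(40 * omega)))      # imaging information M(w): bars
E2 = E1 * M                                      # spectrum-encoded field

phi_cfbg = 500.0 * omega**3                      # assumed nonlinear CFBG phase;
E3 = E2 * np.exp(1j * phi_cfbg)                  # cubic phase -> quadratic (i.e.
                                                 # nonlinear) group delay profile

e_t = np.fft.ifft(np.fft.ifftshift(E3))          # e(t): inverse FT of E3(w)
print(np.abs(e_t).max() > 0)                     # True: a nonzero temporal pulse
```

Since the CFBG acts purely as a phase filter, |E3(ω)| equals |E2(ω)|; only the temporal arrangement of the spectral components is reshaped, which is exactly the anamorphic stretch.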
3. Experiment and Results
The schematic of the proposed ultrafast optical imaging system with AT is shown in
Figure 4. An MLL with an average power of 10 dBm and a repetition rate of 50 MHz is utilized as the laser source to generate broadband femtosecond optical pulses with a duration of less than 80 fs and a center wavelength of around 1550 nm. The femtosecond optical pulses, with a FWHM of 15 nm in the spectral domain, go through the DCF with a dispersion of −0.48 ns/nm, where they are time-stretched to the nanosecond level and the linear one-to-one mapping from wavelength/spectrum to time is achieved. Hence, the wavelength/spectrum information is encoded into the time serial. The time-serial-encoded and time-stretched optical pulses then pass through the circulator 1 (Cir1) from port 1 to port 2 and reflect back to port 3 of the circulator via the CFBG, where the optical pulses are nonlinear-stretched. The optical pulses are reshaped by the CFBG, which has a nonlinear group delay profile. The reshaped optical pulses then go through the circulator 2 (Cir2) from port 1 to port 2, emit into free space via a collimator (Coll), and then pass through a pair of diffraction gratings (DG1 and DG2) with a groove density of 600 lines per millimeter. With the two DGs in a parallel setup, the rainbow pulses are collimated (shown in Figure 4; the tilted angle of the rainbow pulses indicates that different wavelengths reach the sample at different times), which eases the subsequent light adjustment and imaging. The DG acts as the spatial dispersive device in the proposed imaging system. Thanks to the DGs, a linear one-to-one mapping between spectrum/wavelength and space is obtained.
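The angular spread produced by the 600 line/mm gratings can be estimated from the standard grating equation; in the sketch below, only the groove density and the 15 nm band around 1550 nm come from the text, while the incidence angle and diffraction order are assumed values for illustration.

```python
import numpy as np

# Angular dispersion of a 600 line/mm grating over the 15 nm band at 1550 nm.
# Incidence angle and diffraction order are assumed; groove density is from
# the text. Grating equation: sin(theta_m) = m*lam/d - sin(theta_i).
d = 1e-3 / 600                                  # groove spacing (m)
lam = np.array([1542.5e-9, 1550e-9, 1557.5e-9]) # band edges and center
theta_i = np.deg2rad(45.0)                      # assumed incidence angle
m = 1                                           # assumed diffraction order

sin_theta_m = m * lam / d - np.sin(theta_i)
theta_m = np.degrees(np.arcsin(sin_theta_m))    # diffraction angle per wavelength
print(theta_m)  # the "rainbow" spans roughly half a degree across the band
```

In the parallel two-grating arrangement, this angular fan from DG1 is converted by DG2 into a laterally displaced, collimated rainbow, which is what the telescope then demagnifies onto the sample.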
A telescope consisting of two plano-convex lenses with focal lengths of 150 mm (PL1) and 50 mm (PL2) is placed after DG2 at distances of 100 mm and 300 mm, ensuring that the miniature optical pulses focus on the imaging plane. A 1951 USAF resolution target acting as the sample is placed in the focal plane for imaging. The miniature optical pulses are then reflected by the sample and retrace the previous route. The red arrows show the forward propagation of the optical pulses and the green arrows show the backward propagation of the returned pulses. The returned optical pulses pass through the collimator and circulator 2 from port 2 to port 3 and then reach the PD. The PD, with a bandwidth of 10 GHz, is employed to detect the optical information. An oscilloscope with a bandwidth of 10 GHz and a sampling rate of 20 GS/s displays the detected information in real time. A synchronization signal from the MLL and the detected imaging information are sent to the computer for subsequent data processing and image reconstruction. To achieve 2D imaging, the sample is manually translated in the vertical direction in our proposal. To increase the 2D imaging speed, a galvanometer scanner could be employed to adjust the vertical position of the sample. Another way to achieve 2D imaging is to change the spatial dispersive device, using a VIPA together with a DG instead of only a DG; the combination of a DG and a VIPA is an intrinsically 2D imaging approach [25].
The experimental results of reconstructed images using the normal TS technique and the proposed AT approach are described in Figure 5. Figure 5a shows the original standard 1951 USAF resolution target with the imaging region marked by the green square, which has an imaging field of view (FOV) of 2.4 mm by 6.4 mm and a pixel size of 120 × 320. Figure 5b,c show the reconstructed 2D images using the normal TS technique and the proposed AT approach, respectively. Figure 5b indicates that the uniform group delay of the temporal dispersive device used in the TS technique produces an imaging result faithful to the original resolution target. In contrast, Figure 5c shows an anamorphic image with more detail in the dense information region (group 0, number 3) and coarser information in the sparse area of the target image, thanks to the nonuniform group delay of the temporal dispersive device, which is specially designed with a low group delay in the dense information region and a high group delay in the rest of the imaging area. Compared with Figure 5b, Figure 5c improves the group delay-related resolution by 58% [22], and more of the acquired data describe the dense information region of group 0, number 3, which gives more detail about the edges.
To depict more detailed information about the reconstructed images, the reconstructed experimental results of row 63 (blue line) and row 93 (red line) in Figure 5b,c are described in Figure 6a,b, respectively. Figure 6 shows the reconstructed line scans of row 63 (blue line) and row 93 (red line) using the TS technique in Figure 5b and the AT approach in Figure 5c. The spatial resolution in TS imaging is uniform, around 67 μm, measured from the point spread function (PSF) of a sharp edge [22] (shown at point M in Figure 6a). In contrast, the spatial resolution in AT imaging is nonuniform. Although the sampling rates are the same in both AT imaging and TS imaging, the nonlinear mapping between wavelength and time introduced by the CFBG, combined with the linear space-to-wavelength mapping of the spatial dispersive device (DG), yields a nonlinear mapping between time and space. As a result, the reconstructed image is nonlinear-stretched compared with the original image, and the spatial resolution varies across the image (which may be difficult to discern with the naked eye because of the distorted image). Based on the PSF of a sharp edge, the average spatial resolutions at point N and point P in Figure 6b are 43 μm and 75 μm, respectively. For this reason, a single spatial resolution figure is sometimes not quoted for AT imaging, since the resolution differs from position to position. The anamorphic stretch in Figure 6b implies that the group delay-related resolution increases by 58% and that more of the acquired data are employed to describe the variations of the line scan in its first third. The results in Figure 6 also illustrate that the AT technique is a promising approach for data compression without iterative reconstruction or a complicated imaging system setup.
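The edge-based resolution measurement quoted above can be emulated in a few lines. The sketch below uses synthetic data, and the 10–90% edge-width criterion and Gaussian PSF are our assumptions for illustration, not necessarily the exact procedure of the cited measurement.

```python
import numpy as np
from math import erf

# Sketch: estimating spatial resolution from a sharp-edge response
# (synthetic data; 10-90% criterion and Gaussian PSF are assumptions).
def edge_width_10_90(x, profile):
    """Distance over which a monotonically rising edge goes from 10% to 90%."""
    p = (profile - profile.min()) / (profile.max() - profile.min())
    x10 = np.interp(0.1, p, x)
    x90 = np.interp(0.9, p, x)
    return x90 - x10

# A synthetic edge blurred by a Gaussian PSF of known sigma:
x = np.linspace(-200e-6, 200e-6, 4001)          # position (m)
sigma = 25e-6
edge = np.array([0.5 * (1 + erf(xi / (np.sqrt(2) * sigma))) for xi in x])

width = edge_width_10_90(x, edge)
print(width * 1e6)  # ~64 um: 10-90 width is ~2.563*sigma for a Gaussian PSF
```

Applying such an estimator at several positions along an AT line scan would quantify the position-dependent resolution directly, rather than quoting a single number.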
4. Discussion
In our experiment, the assignment of low group delay to regions of dense information and high group delay to the other regions is image-dependent, because the positions of dense and sparse regions differ from image to image. Hence, the most common operation is to perform TS imaging before AT imaging. TS imaging yields a uniform reconstruction of the original image, which tells the dense and sparse regions apart. AT imaging can then be performed using a correctly chosen CFBG with a properly designed group delay profile. Hence, AT imaging has great potential in ultrafast imaging applications associated with mitigating big data and providing tunable resolution. Compared with TS imaging, AT imaging has the limitation of a more complex setup.
Table 1 compares TS imaging and AT imaging in terms of data compression efficiency, imaging speed, resolution enhancement capability, system complexity, and cost. Table 1 clearly shows the advantages and disadvantages of the two approaches, and from it, it is easy to conclude that AT imaging offers data compression and resolution enhancement, which TS imaging is not capable of.
Moreover, the design of the CFBG is crucial in the proposed AT imaging. CFBG designs include linear chirp profiles, nonlinear chirp profiles, and custom-designed chirp profiles. Custom-designed chirp profiles tailored to specific applications are likely to be a future trend. A possible procedure for AT imaging of unknown target images is to first perform TS imaging to identify the dense and sparse imaging regions and then insert a CFBG with a properly designed group delay to achieve AT imaging. Furthermore, other emerging technologies, such as deep learning and pattern recognition, can readily be combined with AT imaging for image analysis.
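The TS-then-AT procedure can be sketched as follows: estimate an information density profile from the first TS pass (a made-up Gaussian bump in this sketch) and allocate the CFBG group delay so that denser spectral bands receive proportionally more of the total stretch time. This proportional allocation rule is our illustrative assumption, not a published CFBG design recipe.

```python
import numpy as np

# Sketch: deriving a target CFBG group delay profile from a TS pre-scan.
# density(lam) is a made-up estimate of image information density, and the
# rule d(tau)/d(lam) proportional to density(lam) is an assumed heuristic.
lam = np.linspace(0.0, 1.0, 1000)                   # normalized wavelength axis
density = 0.2 + np.exp(-((lam - 0.4) / 0.1) ** 2)   # assumed density estimate

d_lam = lam[1] - lam[0]
tau = np.cumsum(density) * d_lam                    # integrate density -> delay
tau = 7.0 * tau / tau[-1]                           # scale to a 7 ns total stretch

# The delay slope (time spent per unit wavelength) peaks where the image
# is dense, so that band is sampled most finely by a uniform digitizer.
slope = np.gradient(tau, lam)
i_dense = np.argmin(np.abs(lam - 0.4))
i_sparse = np.argmin(np.abs(lam - 0.9))
print(slope[i_dense] > slope[i_sparse])  # True
```

A fabricated CFBG would then be specified to approximate this delay profile over the source bandwidth, within the manufacturing constraints of the grating writing process.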