Review

Towards Quantum 3D Imaging Devices

by Cristoforo Abbattista 1, Leonardo Amoruso 1, Samuel Burri 2, Edoardo Charbon 2, Francesco Di Lena 3, Augusto Garuccio 3,4, Davide Giannella 3,4, Zdeněk Hradil 5, Michele Iacobellis 1, Gianlorenzo Massaro 3,4, Paul Mos 2, Libor Motka 5, Martin Paúr 5, Francesco V. Pepe 3,4,*, Michal Peterek 5, Isabella Petrelli 1, Jaroslav Řeháček 5, Francesca Santoro 1, Francesco Scattarella 3,4, Arin Ulku 2, Sergii Vasiukov 3, Michael Wayne 2, Claudio Bruschini 2,†, Milena D’Angelo 3,4,*,†, Maria Ieronymaki 1,† and Bohumil Stoklasa 5,†
1 Planetek Hellas E.P.E., 44 Kifisias Avenue, 15125 Marousi, Greece
2 Ecole Polytechnique Fédérale de Lausanne (EPFL), 2002 Neuchâtel, Switzerland
3 INFN, Sezione di Bari, 70125 Bari, Italy
4 Dipartimento Interateneo di Fisica, Università degli Studi di Bari, 70126 Bari, Italy
5 Department of Optics, Palacký University, 17. Listopadu 12, 77146 Olomouc, Czech Republic
* Authors to whom correspondence should be addressed.
† Equal last author contribution.
Appl. Sci. 2021, 11(14), 6414; https://doi.org/10.3390/app11146414
Submission received: 5 May 2021 / Revised: 6 July 2021 / Accepted: 7 July 2021 / Published: 12 July 2021
(This article belongs to the Special Issue Basics and Applications in Quantum Optics)

Abstract:
We review the advancement of the research toward the design and implementation of quantum plenoptic cameras, radically novel 3D imaging devices that exploit both momentum–position entanglement and photon–number correlations to provide the typical refocusing and ultra-fast, scanning-free, 3D imaging capability of plenoptic devices, along with dramatically enhanced performances, unattainable in standard plenoptic cameras: diffraction-limited resolution, large depth of focus, and ultra-low noise. To further increase the volumetric resolution beyond the Rayleigh diffraction limit, and achieve the quantum limit, we are also developing dedicated protocols based on quantum Fisher information. However, for the quantum advantages of the proposed devices to be effective and appealing to end-users, two main challenges need to be tackled. First, due to the large number of frames required for correlation measurements to provide an acceptable signal-to-noise ratio, quantum plenoptic imaging (QPI) would require, if implemented with commercially available high-resolution cameras, acquisition times ranging from tens of seconds to a few minutes. Second, the elaboration of this large amount of data, in order to retrieve 3D images or refocused 2D images, requires high-performance and time-consuming computation. To address these challenges, we are developing high-resolution single-photon avalanche photodiode (SPAD) arrays and high-performance low-level programming of ultra-fast electronics, combined with compressive sensing and quantum tomography algorithms, with the aim of reducing both the acquisition and the elaboration time by two orders of magnitude. Routes toward exploitation of the QPI devices will also be discussed.

1. Introduction

Fast, high-resolution, and low-noise 3D imaging is in high demand in the most diverse fields, from space imaging to biomedical microscopy, security, industrial inspection, and cultural heritage [1,2,3,4,5]. In this context, conventional plenoptic imaging represents one of the most promising techniques in the field of 3D imaging, due to its superb temporal resolution: 3D imaging is realized in a single shot, at seven frames per second for 30 Mpixel resolution and 180 frames per second for 1 Mpixel resolution [5]; no multiple sensors, near-field techniques, time-consuming scanning, or interferometric techniques are required. However, conventional plenoptic imaging entails a loss of resolution which is often unacceptable. Our strategy to overcome this limitation consists of combining a radically new, foundational approach with last-generation hardware and software solutions. The fundamental idea is to exploit the information stored in correlations of light, by using novel sensors and measurement protocols, to implement a very ambitious task: high-speed (10–100 fps) quantum plenoptic imaging (QPI) characterized by ultra-low noise and an unprecedented combination of resolution and depth of field. The developed imaging technique aims at becoming the first practically usable and properly “quantum” imaging technique that surpasses the intrinsic limits of classical imaging modalities. Beyond the foundational interest, the quantum character of the technique allows for extracting information on 3D images from correlations of light at very low photon fluxes, thus reducing the scene exposure to illumination. The interest in QPI is especially motivated by its potential advantages with respect to other established 3D imaging techniques.
Other established methods, unlike QPI, require either delicate interferometric measurements, as in digital holographic microscopy [6,7], phase retrieval algorithms, as in Fourier ptychography [8], or fast pulsed illumination, as in time-of-flight (TOF) imaging [9,10,11,12,13,14]. Moreover, QPI provides the basis of a scanning-free microscopy modality, overcoming the drawbacks of confocal methods [15].
In view of the deployment of quantum plenoptic cameras suitable for real-world applications, the crucial challenge is the reduction of both the acquisition and the data elaboration times. In fact, a typical complication arises in quantum imaging modalities based on correlation measurements: the reconstruction of the correlation function encoding the desired image requires collecting a large number of frames (30,000–50,000 in the first experimental demonstration of the refocusing capability of correlation plenoptic imaging [16]), which must be individually read and stored before elaborating the output. Therefore, to estimate the total time required to form a quantum plenoptic image, the data reading and transmission times must be added to the acquisition time of the employed sensor. This problem is addressed by an interdisciplinary approach, involving the development of ultrafast single-photon sensor systems based on SPAD arrays [17,18,19,20,21,22], the optimization of circuit electronics to collect and manage the high number of frames (e.g., by GPU) [23,24], and the development of dedicated algorithms (compressive sensing, machine learning, quantum tomography) to achieve the desired SNR with a minimal number of acquisitions [25,26,27,28]. Finally, the performances of QPI will be further enhanced by a novel approach to imaging based on quantum Fisher information [29,30]. Treating the physical model of plenoptic imaging in light of quantum information theory opens new possibilities for improving the setup towards super-resolution capability in the object 3D space. An optimal set of optical setup parameters enables object reconstruction close to the ultimate limits set by nature. In this article, we combine a review of the state of the art in the aforementioned fields with a discussion of how they contribute to the development of QPI and its applications.
The paper is organized as follows: in Section 2, we discuss the working principle and recent advances of Correlation Plenoptic Imaging (CPI), a technique that represents the direct forerunner of QPI; in Section 3, we present the hardware innovations currently under investigation to reduce the acquisition times in CPI; in Section 4, we review the algorithmic solutions to improve QPI; in Section 5, we outline the perspectives of our future work in the context of the Qu3D project; in Section 6, we discuss the relevance of our research. The work presented in this paper involves experts from three scientific research institutions, Istituto Nazionale di Fisica Nucleare (INFN, Italy), Palacky University Olomouc/Department of Optics (UPOL, Czechia), and Ecole polytechnique fédérale de Lausanne/Advanced Quantum Architecture Lab (EPFL, Switzerland), and from the industrial partner Planetek Hellas E.P.E. (PKH, Greece). The activity is carried out within the project “Quantum 3D Imaging at high speed and high resolution” (Qu3D), funded by the 2019 QuantERA call [31].

2. Plenoptic Imaging with Correlations: From Working Principle to Recent Advances

Quantum plenoptic cameras promise to offer the advantages of plenoptic imaging, primarily ultrafast and scanning-free 3D imaging and refocusing capability, with performances that are beyond reach for the classical counterpart. State-of-the-art plenoptic imaging devices are able to acquire multi-perspective images in a single shot [5]. Their working principle is based on the simultaneous measurement of both the spatial distribution and the propagation direction of light in a given scene. The acquired directional information translates into refocusing capability, an extended depth of field (DOF), and parallel acquisition of multi-perspective 2D images, as required for fast 3D imaging.
In state-of-the-art plenoptic cameras [32], directional detection is achieved by inserting a microlens array between the main lens and the sensor of a standard digital camera (see Figure 1a). The sensor acquires composite information that allows identification of both the object point and the lens point where the detected light is coming from. However, the image resolution decreases with inverse proportionality to the gained directional information, for both structural (use of a microlens array) and fundamental (Gaussian limit) reasons; plenoptic imaging at the diffraction limit is thus considered to be unattainable in devices based on simple intensity measurement [5].
Recently, the INFN group involved in Qu3D has proposed a novel technique, named Correlation Plenoptic Imaging (CPI), that overcomes the resolution drawback of current plenoptic devices while keeping their advantages in terms of refocusing capability and 3D reconstruction [1,16,33]. CPI is based on either intensity correlation measurement or photon coincidence detection, depending on the light source: CPI can exploit the spatio-temporal correlations characterizing both chaotic sources [16,33] and entangled photon beams [34] to encode the spatial and directional information on two disjoint sensors, as shown in Figure 1b. CPI with chaotic light is based on the measurement of the correlation function
$$\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)=\left\langle I_a(\boldsymbol{\rho}_a)\,I_b(\boldsymbol{\rho}_b)\right\rangle-\left\langle I_a(\boldsymbol{\rho}_a)\right\rangle\left\langle I_b(\boldsymbol{\rho}_b)\right\rangle, \tag{1}$$
where $\langle\cdot\rangle$ denotes the average over the source statistics, and $I_j(\boldsymbol{\rho}_j)$ ($j=a,b$) are the intensities propagated by beam $j$ and registered at point $\boldsymbol{\rho}_j$ on the sensor $\mathrm{D}_j$. Experimentally, the statistical averages are replaced by time averages, obtained from a collection of frames simultaneously acquired by the two detectors. In CPI devices, the correlation function encodes combined information on the distribution of light on two reference planes, one of which corresponds to the “object plane” that would be focused on the sensor in a standard imaging setup, placed at a distance $s_o$ from the focusing element. In general, given an object placed at a distance $s$ from the focusing element and characterized by the light intensity distribution $A(\boldsymbol{\rho})$, its images are encoded in the function $\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)$, in the geometrical-optics limit, as
$$\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)\propto A^n\!\left(\frac{s}{s_o}\,\frac{\boldsymbol{\rho}_a}{M}+\left(1-\frac{s}{s_o}\right)\frac{\boldsymbol{\rho}_b}{M_L}\right), \tag{2}$$
where $M$ and $M_L$ are the magnifications of the images of the reference object plane and of the focusing element, respectively, while the power $n$ is equal to 1 or 2, depending on whether the object lies in only one [16,33,35] or in both [36,37] optical paths.
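As a concrete illustration of how Eq. (2) enables refocusing, the following NumPy sketch builds a simulated 1D correlation function for a double-slit mask and recovers the refocused image by summing the correlation function along the appropriate lines in the $(\rho_a,\rho_b)$ plane. The geometry, grid sizes, and parameter values are our own toy assumptions, not those of the actual experiments:

```python
import numpy as np

# toy 1D geometry (all parameter values are illustrative)
M, M_L = 1.0, 1.0        # magnifications of reference plane and focusing element
s_o, s = 100.0, 130.0    # focused and actual object distances
n = 1                    # object in one optical path only

def A(rho):              # double-slit intensity profile
    return ((np.abs(rho - 0.6) < 0.15) | (np.abs(rho + 0.6) < 0.15)).astype(float)

rho_a = np.linspace(-3, 3, 601)   # spatial sensor coordinates
rho_b = np.linspace(-3, 3, 121)   # directional sensor coordinates

# correlation function in the geometrical-optics limit, cf. Eq. (2)
Gamma = A((s / s_o) * rho_a[:, None] / M
          + (1 - s / s_o) * rho_b[None, :] / M_L) ** n

# refocusing: for each target point rho, sum Gamma along the line
# rho_a = (M * s_o / s) * (rho - (1 - s / s_o) * rho_b / M_L)
rho = np.linspace(-2, 2, 401)
refocused = np.zeros_like(rho)
for j, rb in enumerate(rho_b):
    ra_needed = (M * s_o / s) * (rho - (1 - s / s_o) * rb / M_L)
    refocused += np.interp(ra_needed, rho_a, Gamma[:, j])
```

Summing along these lines inverts the coordinate mixing in Eq. (2), so each directional pixel contributes a shifted-and-rescaled copy of the out-of-focus object, and the sum reproduces the in-focus profile.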
An experimental demonstration of CPI based on pseudo-thermal light is shown in Figure 2c, where both the acquired out-of-focus image and the corresponding refocused image are reported [16]. This proof of principle demonstrated that CPI is characterized by diffraction-limited resolution on the object plane focused on the sensor. Details on the resolution limits are shown in Figure 2a, where one can observe an even more striking effect: thanks to its intrinsic coherent nature, CPI enables an unprecedented combination of resolution and DOF [16]. However, the low-noise sCMOS camera employed in the experiment, working at 50 fps at full resolution, required several minutes to acquire the 30,000 frames used to reconstruct the plenoptic correlation function, and a standard workstation took over 10 h to elaborate the acquired data and perform refocusing. The resulting image was also rather noisy, due to the well-known resolution vs. noise compromise of chaotic light ghost imaging, which also affects CPI [35].
We are addressing these issues by employing two kinds of sources:
  • Chaotic light sources, such as pseudothermal light, natural light, LEDs and gas lamps, and even fluorescent samples, operated either in the high-intensity regime or in the “two-photon” regime, in which an average of two photons per coherence area propagates in the setup. Chaotic light sources are well known to be characterized by EPR-like correlations in both momentum and position variables [38,39], to be exploited in an optimal way to retrieve an accurate plenoptic correlation function in the shortest possible time. In order to efficiently retrieve spatio-temporal correlations, tight filtering of the source can be necessary to match the filtered source coherence time with the response time of the SPAD arrays that can be as low as 1 ns. Alternatively, pseudorandom sources with a controllable coherence time, made by impinging laser light on a fast-switching digital micromirror device (DMD), can be employed. Interestingly, recent studies have shown that, in the case of chaotic light illumination, the plenoptic properties of the correlation function do not need to rely strictly on ghost imaging: correlations can be measured between any two planes where ordinary images (see Figure 1b) are formed [35]. This discovery has led to the intriguing result that the SNR of CPI improves when ghost imaging of the object is replaced by standard imaging [40]. In particular, excellent noise performances are expected in the case of images of birefringent objects placed between crossed polarizers. This kind of source is particularly relevant in view of applications in fields like biomedical imaging (cornea, zebrafish, invertebrates, biological phantoms such as starch dispersions), security (distance detection, DOF extension), and satellite imaging.
  • Momentum–position entangled beams, generated by spontaneous parametric down-conversion (SPDC), which have the potential to combine QPI with sub-shot noise imaging [41], thus enabling high-SNR imaging of low-absorbing samples, a challenging issue in both biomedical imaging and security.
The designs of both quantum plenoptic devices are currently being optimized by implementing a novel protocol that further mitigates the resolution vs. DOF compromise with respect to the one shown in Figure 2a: this protocol is based on the observation that, for any given resolution, the DOF can be maximized by correlating the standard images of two arbitrary planes, chosen in the surroundings of the object of interest, instead of imaging the focusing element [36]. Moreover, we are investigating the possibility of merging quantum plenoptic imaging with the measurement protocols developed in the context of differential ghost imaging [42].
In the following sections, we will discuss the technical solutions, both on the hardware and on the software side, that we are investigating to speed up the process of generating a correlation plenoptic image. The problematic aspects can be clarified by describing the practical process of CPI, which consists of the following steps:
1
Collecting $N_f$ synchronized pairs of frames from a sensor $\mathrm{D}_a$, with resolution $N_{ax}\times N_{ay}$, and a sensor $\mathrm{D}_b$, with resolution $N_{bx}\times N_{by}$. Synchronization of two separate digital sensors entails technical complications that can be overcome by using two disjoint parts of the same sensor [16]. The total acquisition time is
$$T = N_f\,(\tau_{\mathrm{exp}} + \tau_d), \tag{3}$$
where $\tau_{\mathrm{exp}}$ is the frame exposure time, which must be shorter than the coherence time of the impinging light in order to exploit the maximal information on intensity fluctuations, while $\tau_d$ is the dead time between subsequent frames, usually fixed by the employed sensors.
2
Each acquired frame is transferred to a computer to be processed. This step can occur either progressively during the capture of the subsequent frames, or at the end of the acquisition process.
3
The collected frames are used to obtain an estimate of the correlation of intensity fluctuations (1) as
$$\Gamma(\boldsymbol{\rho}_a,\boldsymbol{\rho}_b)\simeq\frac{1}{N_f}\sum_{k=1}^{N_f} I_a^{(k)}(\boldsymbol{\rho}_a)\, I_b^{(k)}(\boldsymbol{\rho}_b)-\left[\frac{1}{N_f}\sum_{k=1}^{N_f} I_a^{(k)}(\boldsymbol{\rho}_a)\right]\left[\frac{1}{N_f}\sum_{k=1}^{N_f} I_b^{(k)}(\boldsymbol{\rho}_b)\right], \tag{4}$$
where $I_j^{(k)}(\boldsymbol{\rho}_j)$, with $j=a,b$, is the intensity measured in frame $k$ at the pixel of detector $\mathrm{D}_j$ centered on the coordinate $\boldsymbol{\rho}_j$. In this way, information initially encoded in $N_f(N_{ax}N_{ay}+N_{bx}N_{by})$ numbers is used to reconstruct a correlation function determined by $N_{ax}N_{ay}N_{bx}N_{by}$ values.
The accuracy of the correlation function estimate (4) increases with the number of frames like $\sqrt{N_f}$ [40]. However, increasing $N_f$ also linearly extends the total acquisition time $T$. Therefore, the combination of (1) fast and low-noise sensors, (2) methods to extract a good-quality signal from a smaller number of frames, and (3) tools for efficient data transfer and elaboration is crucial to speed up the acquisition process and make the CPI technology ready for real-world applications.
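The estimator of Eq. (4) maps naturally onto a single matrix product over flattened pixels, which is one reason the computation parallelizes well. A minimal NumPy sketch (function and array names are our own):

```python
import numpy as np

def correlation_of_fluctuations(frames_a, frames_b):
    """Estimate the correlation function of Eq. (4) from N_f synchronized
    frame pairs; frames_a has shape (N_f, N_a) and frames_b has shape
    (N_f, N_b), with pixels flattened along the last axis."""
    n_f = frames_a.shape[0]
    # <I_a I_b> term, computed as one matrix product over the frame index
    cross = frames_a.T @ frames_b / n_f
    # subtract the product of the single-detector means
    means = np.outer(frames_a.mean(axis=0), frames_b.mean(axis=0))
    return cross - means          # (N_a, N_b) correlation matrix
```

The dominant cost is a single GEMM of shape $(N_a\times N_f)\cdot(N_f\times N_b)$, which is exactly the kind of operation GPUs and dedicated accelerators execute efficiently.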

3. Hardware Speedup: Advanced Sensors and Ultra-Fast Computing Platforms

To improve the performance of CPI in terms of acquisition speed and data elaboration time, we are employing dedicated advanced sensors and ultra-fast computing platforms. In this section, we describe the implementation details and the perspectives in these fields.

3.1. SPAD Arrays as High-Resolution Time-Resolved Sensors

A relevant part of the speedup that we are seeking is determined by replacing commercial high-resolution sensors, like scientific CMOS and EMCCD cameras, with sensors based on cutting-edge technology such as single-photon avalanche diode (SPAD) arrays. A SPAD is basically a photodiode which is reverse-biased above its breakdown voltage, so that a single photon impinging onto its photosensitive area can create an electron–hole pair, triggering in turn an avalanche of secondary carriers and developing a large current on a very short timescale (picoseconds) [17,18]. This operation regime is known as Geiger mode. The SPAD output voltage is sensed by an electronic circuit and directly converted into a digital signal, further processed to store the binary information that a photon arrived, and/or the photon time of arrival. In essence, a SPAD can be seen as a photon-to-digital conversion device with exquisite temporal precision. SPADs can also be gated, in order to be sensitive only within temporal windows as short as a few nanoseconds, as shown in Figure 3. Individual SPADs can nowadays be used as the building blocks of large arrays, with each pixel circuit containing both the SPAD and the immediate photon processing logic and interconnect. Several CMOS processes are readily available and allow for tailoring both the key SPAD performance metrics and the overall sensor or imager architecture [43,44]. Sensitivity and fill factor have for some time lagged behind those of their scientific CMOS or EMCCD counterparts but have been substantially catching up in recent years.
Based on the requirements of QPI, we have chosen to employ the SwissSPAD2 array developed by the AQUA laboratory of EPFL, characterized by a 512 × 512 pixel resolution (see Figure 4), which is one of the largest and most advanced SPAD arrays to date [20,22]. The sensor is internally organized as two halves of 256 × 512 pixels to reduce load and skew on signal lines and enable faster operation. It is a purely binary gated imager, i.e., each pixel records either a 0 (no photon) or a 1 (one or more photons) for each frame, with basically zero readout noise. The sensor is controlled by an FPGA generating the control signals for the gating circuitry and readout sequence and collecting the pixel detection results. In the FPGA, the resulting one-bit images can be further processed, e.g., accumulated into multi-bit images, before being sent to a computer/GPU for analysis and storage. The maximum frame rate is 97.7 kfps, and the native fill factor of 10.5% can be improved by 4–5 times, for collimated light, by means of a microlens array [21] (higher values are expected from simulation after optimization); the photon detection probability is 50% (25%) at 520 nm (700 nm) and 6.5 V excess bias. The device is also characterized by low noise (typically less than 100 cps average Dark Count Rate per pixel at room temperature, with a median value about 10 times lower) and advanced circuitry for nanosecond gating. A detailed comparison of SwissSPAD2 with other large-format CMOS SPAD imaging cameras is presented in Ref. [22].
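Since each frame is a one-bit image, the on-FPGA accumulation into multi-bit images amounts to summing Boolean frames. The toy NumPy emulation below illustrates the idea; the per-gate detection probability and burst length are our own assumptions, not sensor specifications:

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, h, w = 255, 256, 512            # a burst of one-bit frames
p_detect = 0.03                            # assumed detection probability per gate

# 1 = one or more photons detected in the gate, 0 = no photon (zero readout noise)
binary_frames = rng.random((n_frames, h, w)) < p_detect

# FPGA-style accumulation into a multi-bit image (fits in 8 bits for <= 255 frames)
intensity = binary_frames.sum(axis=0, dtype=np.uint8)
```

Accumulating bursts of up to 255 one-bit frames into an 8-bit image reduces the data volume by a factor of about 32 before anything leaves the sensor board.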
Currently, we are using the available version of SwissSPAD2, with a 512 × 256 pixel resolution, to generate sequences of frames and store them in an on-board 2 GB memory, before transferring them to a computer by a standard USB3 connection, which can be done using existing hardware. We are integrating this sensor in the prototype of the chaotic-light-based quantum plenoptic camera in such a way that two disjoint halves of the sensor (of 256 × 256 pixels each) are used to retrieve the images of the two reference planes. The high speed of this sensor is expected to reduce the acquisition time of quantum plenoptic images by two orders of magnitude with respect to the first CPI experiment [16], in which the region of interest on the sensor was made of 700 × 700 pixels on the spatial measurement side and 600 × 600 pixels on the directional measurement side; 2 × 2 binning on both sides during acquisition, and a subsequent 10 × 10 binning on the directional side, led to an effective spatial resolution of 350 × 350 pixels and an angular resolution of 25 × 25 pixels.
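The expected speedup can be checked with back-of-the-envelope arithmetic based on the frame rates quoted above:

```python
# acquisition time for the ~30,000 frames of the first CPI experiment [16]
n_frames = 30_000
t_scmos = n_frames / 50          # 50 fps sCMOS: 600 s, i.e., several minutes
t_spad = n_frames / 97_700       # 97.7 kfps SwissSPAD2: ~0.3 s
speedup = t_scmos / t_spad       # ~2000x, beyond two orders of magnitude
```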
We are also working towards several further optimizations of the sensor system, e.g., by developing gating for noise reduction in QPI devices. In order to employ the full sensor (512 × 512 pixels), we are implementing a synchronization mechanism for a pair of imagers by means of two FPGAs, so as to operate on a common time-base at the nanosecond level (to this end, two control circuits shall operate from a single clock and have a direct communication link). Finally, the SwissSPAD2 arrays are being integrated with a fast communication interface in order to speed up data transfer and make it possible to deliver, in a sustained way, full binary frame sequences to a GPU. The latter will run advanced algorithms for data pre-processing, image reconstruction and optimization, as we will discuss in more detail in the next subsection.

3.2. Computational Hardware Platform

In a QPI device integrated with SwissSPAD2, the acquired data rate for a single frame acquisition can be estimated at 26 Gb/s, which is beyond the reach of standard data buses. This poses great challenges in both hardware and software design. Our approach relies on the expertise of PKH in the careful design of electronic interconnections (buses) between the sensor control electronics and the processing device; the theoretical refinement and optimization of algorithms (e.g., compressive sensing [46,47]); their porting to an efficient computational environment; and the design of specific acquisition electronics for optimizing the data flow from the light sensors to a dedicated processing system, able to guarantee the required computational performance (e.g., exploiting GPUs or FPGAs) [23,24].
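The quoted raw data rate follows directly from the sensor parameters given in the previous subsection:

```python
# one bit per pixel, full 512 x 512 array, at the maximum frame rate
bits_per_frame = 512 * 512
frame_rate = 97_700                              # fps
rate_gbps = bits_per_frame * frame_rate / 1e9    # ~25.6 Gb/s, i.e., ~26 Gb/s
```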
The introduction of an embedded data acquisition and processing board, integrating a GPU, aims at data pre-processing, thus significantly reducing the amount of data to be transferred to (and saved on) an external workstation. GPUs exploit a highly parallel elaboration paradigm, making it possible to design algorithms that run in parallel on hundreds or thousands of cores and to make them available on embedded devices. A great advantage of GPUs is programmability: many standard tools exist (e.g., OpenCL and CUDA) that allow fast and efficient design of complex algorithms, which can be injected on the fly into the GPU memory to accomplish tasks ranging from simple filtering to advanced machine learning. Efforts are being made to design a processing matrix, so that each line and column of the sensor will be managed by a dedicated portion of the heterogeneous processing platform (CPU/GPU/FPGA): the pixel series processor. These dedicated units will be interconnected to one another and cooperate to implement algorithms that require distributed processing on a very small scale.
The embedded acquisition and processing board is designed to best fit the SPAD array and SW application needs. The optimal system design will be evaluated by considering theoretical algorithms, engineered SW implementations, HW set-up, and HW/SW trade-offs. We will identify a preliminary set of possible configurations and perform a trade-off analysis by comparing overall performances, considering the requirements in data quality, processing speed, costs, complexity, etc.
Given the challenging objectives to be achieved, a preliminary analysis of COTS (Commercial Off-The-Shelf) solutions was performed, in order to identify a set of accelerating devices addressing the QPI requirements in terms of computing capability and portability. The NVIDIA Jetson Xavier AGX board, shown in Figure 5, is considered a promising candidate to achieve our goal. This device indeed offers an encouraging performance/integration ratio, with low power usage and very interesting computing capabilities. Its main characteristics are reported in Table 1.
Considering the listed HW capabilities, although the ARM processor has limited computing power when compared to high-end desktop processors, it allows for leveraging multi-core capabilities to implement the part of the code that feeds the quite powerful on-board GPU. In addition, the foreseen optimization strategy will require a dedicated implementation able to exploit the maximum available memory of 16 GB, shared with the GPU. The solution to be developed should also take into account the bandwidth of about 136 GB/s of the on-board memory, which may represent a limiting factor when the GPU and CPU exchange buffers. An implementation based on the CUDA framework, rather than OpenCL or other technologies, will be preferred to best use the NVIDIA device. Finally, given the capability of the NVDLA engines to perform multiplications and accumulations very quickly, we consider it interesting to assess how to exploit these devices for implementing the QPI-specific correlation functions and/or other multiplication/sum-intensive computations.
Along with the assessment of a dedicated HW solution, further optimizations applicable at the algorithmic level have been considered, in order to enrich the engineered device with a highly customized algorithmic workflow able to exploit the peculiarities of the CPI technique and its input data. More specifically, we are analyzing the steps of QPI processing that appear to be the most computationally demanding, thus representing a bottleneck for performance. To this end, tailored reshaping operations applied to the original three-dimensional multi-frame structure of the input data were explored, to facilitate the development of a parallelized elaboration paradigm for evaluating the CPI-related correlation function. In addition, the peculiar feature of the input dataset acquired by SPAD sensors, namely that frames are one-bit images, will be exploited through dedicated implementations that benefit from intensive multiplication/sum math among binary-valued variables.
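For one-bit frames, the frame-wise products in the correlation estimator reduce to bitwise ANDs, and the sums over frames to popcounts on packed words. The NumPy sketch below illustrates the idea; the frame statistics and sizes are our own toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 8000                                  # multiple of 8, so no bit padding
frames_a = rng.random((n_frames, 16)) < 0.1      # toy one-bit frames, 16 pixels
frames_b = rng.random((n_frames, 16)) < 0.1

# pack each pixel's time series into bytes: shape (pixels, n_frames // 8)
packed_a = np.packbits(frames_a.T, axis=1)
packed_b = np.packbits(frames_b.T, axis=1)

def coincidences(i, j):
    """sum_k Ia^(k) Ib^(k) for binary data: AND the bit streams, then popcount."""
    return int(np.unpackbits(packed_a[i] & packed_b[j]).sum())

# agrees with the direct multiply-and-sum estimator
assert coincidences(0, 3) == int((frames_a[:, 0] & frames_b[:, 3]).sum())
```

Packing the bit streams cuts memory traffic by a factor of 8, and the AND-plus-popcount pattern maps onto native instructions on CPUs, GPUs, and FPGAs alike.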

4. Quantum and Classical Image-Processing Algorithms

Further reduction of the acquisition time and optimization of the elaboration time are addressed by exploiting dedicated quantum and classical image processing, as well as novel mathematical methods coming, on one hand, from compressive sensing and, on the other hand, from quantum tomography and quantum Fisher information.

4.1. Compressive Sensing

In order to reduce the amount of required data (at present $10^3$–$10^4$ frames) by at least one order of magnitude, we are exploring different approaches. First, we are investigating the possibility of implementing compression techniques to improve bus bandwidth utilization, thus acting on data transfer optimization. Data compression may also rely on the manipulation of raw input data to determine only the relevant information, in a sort of information bottleneck paradigm, in which software nodes in a lattice provide their contribution to a probable reconstruction of the actual decompressed data. This is very similar to artificial neural network structures, where the network stores a representation of a phenomenon and returns a response based on some similarity rating versus the observed data. Moreover, the availability of advanced processing technologies allows the investigation and implementation of alternative techniques, based on compressive sensing [46,47], able to exploit the sparsity of the retrieved signal to reconstruct information from a heavily sub-sampled signal.
As in other correlation imaging techniques, such as ghost imaging (GI), in the CPI protocol, the object image is reconstructed by performing correlation measurements between the intensities at two disjoint detectors. Katz et al. [48] demonstrated that conventional GI offers the possibility of applying compressive sensing (CS), boosting the recovered image quality.
CS theory asserts the possibility of recovering the object image from far fewer samples than required by the Nyquist–Shannon sampling theorem, and it relies on two main principles: sparsity of the signal (once expressed in a proper basis) and incoherence between the sensing matrix and the sparsity basis.
In conventional GI, the transmission measured for each speckle pattern represents a projection of the object image, and CS finds the sparsest image among all the possible images consistent with the projections. In practice, CS reconstruction algorithms solve a convex optimization problem, looking for the image which minimizes the $L_1$ norm in the sparse basis among the ones compatible with the bucket measurements; see Refs. [49,50,51,52] for a review.
We are developing a novel protocol that reduces the number of measurements required for image recovery by an order of magnitude. Once properly refocused, a single acquisition can be fed into the compressive sensing algorithm several times, thus exploiting the plenoptic properties of the acquired data and increasing the signal-to-noise ratio of the final refocused image. We tested the CS-CPI algorithm with numerical simulations, as summarized in Figure 6. In (a), a double-slit mask is reconstructed by correlation measurements considering $N=6000$ frames; in (b), the standard reconstruction is repeated considering only 10% of the available frames, while (c) shows the CS reconstruction obtained from the same reduced set of measured data. In addition to a data-fidelity term corresponding to a linear regression, we penalized the $L_1$ norm of the reconstructed image to account for its sparsity in the 2D DCT domain. The resulting optimization problem is known as the LASSO (Least Absolute Shrinkage and Selection Operator) [53]. We employed the coordinate descent algorithm to solve it efficiently, and we set the regularization parameter, which controls the degree of sparsity of the estimated coefficients, by cross-validation. In this proof-of-concept experiment, we simply use Pearson’s correlation coefficient to measure the similarity between the reconstructions obtained using the restricted dataset and the image obtained considering all the $N=6000$ frames.
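As an illustration of GI-style CS reconstruction with a LASSO fit in the 2D DCT basis, the following sketch, which is not the project code, recovers a double-slit-like mask from random binary sensing patterns. The pattern model, image size, and regularization value `alpha` are our own choices (the paper sets the latter by cross-validation):

```python
import numpy as np
from scipy.fft import idctn
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 16                                   # small n x n test image
img = np.zeros((n, n))
img[:, 4] = 1.0                          # double-slit-like transmissive mask
img[:, 10] = 1.0

# GI-style sensing: random binary patterns and bucket (total transmission) values
m = 120                                  # measurements, well below n * n = 256
patterns = rng.integers(0, 2, size=(m, n * n)).astype(float)
y = patterns @ img.ravel()

# sparsity basis: columns of B map 2D DCT coefficients to images
B = np.stack([idctn(e.reshape(n, n), norm="ortho").ravel()
              for e in np.eye(n * n)], axis=1)
A = patterns @ B                         # effective sensing matrix in the DCT domain

# LASSO: least-squares data fidelity + L1 penalty, solved by coordinate descent
lasso = Lasso(alpha=0.01, max_iter=20000)
lasso.fit(A, y)
recon = (B @ lasso.coef_).reshape(n, n)
```

With 120 bucket values against 256 unknown pixels, the system is under-determined, and it is the $L_1$ penalty on the DCT coefficients that selects a sparse, double-slit-consistent solution.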

4.2. Plenoptic Tomography

A novel reconstruction algorithm, which provides a genuine 3D image free of artifacts, is based on the idea of recasting plenoptic imaging as absorption tomography. The classical refocusing algorithm superimposes images of the 3D scene taken from different viewpoints; out-of-focus parts of the scene thus necessarily contribute, which is the effect responsible for the blurred portions of the reconstructed 3D image. The tomographic approach is based on a different principle and provides sufficient axial and transverse resolution without artifacts. To this end, the object space is divided into voxels, and the goal is to reconstruct the absorption coefficient of each voxel. The measured correlation coefficient of a pair of points on the correlated planes is transformed so as to be linearly proportional to the attenuation along the ray path connecting the two points. The classical inverse Radon transform can be used in this scenario to obtain a 3D image, but the maximum-likelihood absorption tomography algorithm [54] further enhances the reconstruction quality and performs well even for a small range of projection angles. In fact, advanced tomographic reconstruction algorithms based on the maximum-likelihood principle are more resistant to noise and, for a given precision, require fewer acquisitions than standard tomographic protocols. We are investigating dedicated tools to handle informationally incomplete detection schemes at very high resolution, as well as optimal data-analysis methods based on convex programming.
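The voxel-based scheme above can be illustrated with a minimal numerical sketch. The grid, ray geometry, and iteration count below are illustrative assumptions, and the multiplicative update is a generic ML-EM-style iteration on linearized (Beer-Lambert) attenuation data, not the specific algorithm of Ref. [54]:

```python
# Minimal voxel-absorption tomography sketch: transmissions along straight
# rays are linearized (Beer-Lambert) into line integrals p = L @ mu, then a
# multiplicative ML-EM-style update estimates a nonnegative absorption
# coefficient per voxel. Grid, ray families, and iteration count are
# illustrative; this is a generic update, not the algorithm of Ref. [54].
import numpy as np

n = 8                                    # 8 x 8 voxel grid
mu_true = np.full((n, n), 0.1)           # weak uniform background
mu_true[2:4, 2:5] = 0.8                  # absorbing object
mu = mu_true.ravel()

# Ray system matrix: horizontal, vertical, and two diagonal ray families
# (each row holds the path length, here 1, through every crossed voxel).
rows = []
idx = np.arange(n * n).reshape(n, n)
for k in range(n):
    for line in (idx[k, :], idx[:, k]):          # row k and column k
        r = np.zeros(n * n); r[line] = 1.0; rows.append(r)
for k in range(-n + 1, n):                       # both diagonal directions
    for d in (np.diagonal(idx, k), np.diagonal(np.fliplr(idx), k)):
        r = np.zeros(n * n); r[d] = 1.0; rows.append(r)
L = np.array(rows)

p = L @ mu                               # noiseless line integrals, -ln(T)

# Multiplicative update: mu <- mu * [L^T (p / L mu)] / [L^T 1].
est = np.ones(n * n)
sens = L.T @ np.ones(L.shape[0])         # per-voxel sensitivity
for _ in range(500):
    est *= (L.T @ (p / (L @ est))) / sens

residual = np.linalg.norm(L @ est - p) / np.linalg.norm(p)
print(f"relative data residual: {residual:.4f}")
```

The update keeps the estimate nonnegative by construction and drives the forward projections toward the measured line integrals, even though the system here is underdetermined (46 rays for 64 voxels), which mirrors the informationally incomplete detection schemes mentioned above.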

4.3. Quantum Tomography and Quantum Fisher Information

A further quantum approach to image analysis and detection schemes will be employed to achieve super-resolution (or, alternatively, to maintain the desired resolution while speeding up acquisition and elaboration of the quantum plenoptic images) and to compare the correlation plenoptic detection scheme with the ultimate quantum limits. The basic concept underpinning Fisher-information super-resolution imaging is the formal mathematical analogy between classical wave optics and quantum theory, which makes it possible to apply the advanced tools of quantum detection and estimation theory to classical imaging and metrology [29,55]. The advantage of this approach lies in the ability to quantify the performance of an imaging setup through rigorous statistical quantities. Inspired by the quantum theory of detection and estimation, the quantum Fisher information, a quantity connected with the ultimate limits allowed by nature, is computed for simple 3D imaging scenarios, such as the localization and resolution of two points in the object 3D space [30,56,57,58]. For example, one may want to measure the separation of two point-like sources and seek the optimal detection scheme, namely, the one extracting the maximum amount of information about this parameter. We shall thus employ the quantum Fisher information to design optimal measurement protocols for the quantum plenoptic devices, able to extract specific relevant information, with enhanced resolution, from a minimal number of acquisitions. The minimal number of detections over the correlation planes that achieves acceptable reconstruction quality is a relevant question, since it is directly connected to the amount of processed data and can thus reduce the data-processing time.
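The two-point separation example can be made quantitative with a short numerical sketch (the PSF width, detector grid, and finite-difference step are illustrative). It reproduces the behavior known from the superresolution literature cited above: for direct intensity detection with a Gaussian PSF, the per-photon classical Fisher information about the separation vanishes as the sources merge ("Rayleigh's curse"), whereas the quantum Fisher information remains constant at 1/(4σ²):

```python
# Per-photon classical Fisher information (FI) about the separation s of two
# equally bright incoherent point sources, for direct intensity detection
# with a Gaussian PSF. FI(s) = integral of (dI/ds)^2 / I over the detector,
# evaluated numerically with finite differences in s.
import numpy as np

sigma = 1.0
dx = 0.01
x = np.arange(-15.0, 15.0, dx)           # detector coordinate grid

def psf(u):
    """Normalized Gaussian point-spread function."""
    return np.exp(-u**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def intensity(s):
    """Image of two incoherent points separated by s (centroid known)."""
    return 0.5 * (psf(x - s / 2) + psf(x + s / 2))

def fisher_information(s, h=1e-4):
    """Per-photon FI of s via a central finite difference in s."""
    dI = (intensity(s + h) - intensity(s - h)) / (2 * h)
    I = intensity(s)
    return np.sum(dI**2 / I) * dx

fi_small = fisher_information(0.05 * sigma)   # deep sub-Rayleigh separation
fi_large = fisher_information(6.0 * sigma)    # well-resolved sources
qfi = 1.0 / (4 * sigma**2)                    # quantum FI for the separation

print(f"FI(s = 0.05 sigma) = {fi_small:.5f}")
print(f"FI(s = 6 sigma)    = {fi_large:.5f}  (QFI = {qfi:.2f})")
```

The gap between the vanishing direct-imaging FI at small separations and the constant QFI is precisely what motivates seeking optimal detection schemes that saturate the quantum limit.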

5. Perspectives

In the context of the Qu3D project, all the developments and technologies presented in the previous sections will be integrated into the implementation of two quantum plenoptic imaging (QPI) devices, namely,
  • a compact single-lens plenoptic camera for 3D imaging, based on the photon number correlations of a dim chaotic light source;
  • an ultra-low noise plenoptic device, based on the correlation properties of entangled photon pairs emitted by spontaneous parametric down-conversion (SPDC), enabling 3D imaging of low-absorbing samples, at the shot-noise limit or below.
Our objective is to achieve, in both devices, high resolution, whether diffraction-limited or sub-Rayleigh, combined with a DOF up to one order of magnitude larger than in standard imaging. The science and technology developed in the project will contribute to establishing a solid baseline of knowledge and skills for the development of a new generation of imaging devices, from quantum digital cameras enhanced by refocusing capability to quantum 3D microscopes [37] and space imaging devices.

6. Conclusions

We have presented the challenging research directions we are following to achieve practical quantum 3D imaging: minimizing the acquisition time without renouncing high SNR, high resolution, and large DOF [1,2,3,4,5]. Our work represents a significant advance over the state of the art of both classical and quantum imaging, as it enhances the performance of plenoptic imaging and dramatically speeds up quantum imaging, thus facilitating the real-world deployment of quantum plenoptic cameras.
This ambitious goal will be pursued by working to extend the reach of quantum imaging to other fields of science and to open the way to new opportunities and applications, including new medical diagnostic tools, such as 3D microscopes for biomedical imaging, as well as novel devices (quantum digital cameras enhanced by 3D imaging, refocusing, and distance-detection capabilities) for security, space imaging, and industrial inspection. The collaboration crosses the traditional boundaries between the disciplines involved: quantum imaging, ultra-fast cameras, low-level GPU programming, compressive sensing, quantum information theory, and signal processing.

Author Contributions

Conceptualization, M.D., Z.H., J.Ř., S.B., P.M., E.C. and C.B.; methodology, F.V.P., G.M., M.P. (Martin Paur), B.S., S.B., P.M., E.C. and C.B.; software, F.S. (Francesco Scattarella), G.M., L.M., M.P. (Michal Peterek), M.I. (Michele Iacobellis), I.P., F.S. (Francesca Santoro), P.M., A.U. and M.W.; firmware, P.M., A.U. and M.W.; validation, F.D.L., G.M., M.I. (Michele Iacobellis), P.M., A.U. and M.W.; formal analysis, F.D.L., M.I. (Michele Iacobellis), I.P. and F.S. (Francesca Santoro); setup implementation, P.M., A.U. and M.W.; setup design, F.D.L., D.G. and S.V.; writing—original draft preparation, F.V.P., I.P., F.S. (Francesca Santoro), S.B. and C.B.; writing—review and editing, F.D.L., F.V.P., I.P., F.S. (Francesca Santoro) and C.B.; supervision, M.D., A.G., F.V.P., C.A., L.A., E.C. and C.B.; project administration, M.D., C.A. and L.A.; funding acquisition, M.D., C.A., M.I. (Maria Ieronymaki) and C.B. All authors have read and agreed to the published version of the manuscript.

Funding

Project Qu3D is supported by the Italian Istituto Nazionale di Fisica Nucleare, the Swiss National Science Foundation (grant 20QT21_187716, "Quantum 3D Imaging at high speed and high resolution"), the Greek General Secretariat for Research and Technology, and the Czech Ministry of Education, Youth and Sports, under the QuantERA program, which has received funding from the European Union's Horizon 2020 research and innovation program.

Informed Consent Statement

Not applicable.

Data Availability Statement

Relevant data are available from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
SPAD: Single-Photon Avalanche Diode
QPI: Quantum Plenoptic Imaging
fps: frames per second
GPU: Graphics Processing Unit
CPI: Correlation Plenoptic Imaging
SNR: Signal-to-Noise Ratio
DOF: Depth of Field
EPR: Einstein–Podolsky–Rosen
FPGA: Field-Programmable Gate Array
CPU: Central Processing Unit
GI: Ghost Imaging
CS: Compressive Sensing
DCT: Discrete Cosine Transform

References

  1. Sansoni, G.; Trebeschi, M.; Docchio, F. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation. Sensors 2009, 9, 568.
  2. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photonics 2011, 3, 128.
  3. Hansard, M.; Lee, S.; Choi, O.; Horaud, R. Time of Flight Cameras: Principles, Methods, and Applications; Springer: Berlin, Germany, 2013.
  4. Mertz, J. Introduction to Optical Microscopy; Cambridge University Press: Cambridge, UK, 2019.
  5. Prevedel, R.; Yoon, Y.-G.; Hoffmann, M.; Pak, N.; Wetzstein, G.; Kato, S.; Schrödel, T.; Raskar, R.; Zimmer, M.; Boyden, E.S.; et al. Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy. Nat. Methods 2014, 11, 727.
  6. Hall, E.M.; Thurow, B.S.; Guildenbecher, D.R. Comparison of three-dimensional particle tracking and sizing using plenoptic imaging and digital in-line holography. Appl. Opt. 2016, 55, 6410–6420.
  7. Kim, M.K. Principles and techniques of digital holographic microscopy. SPIE Rev. 2010, 1, 018005.
  8. Zheng, G.; Horstmeyer, R.; Yang, C. Wide-field, high-resolution Fourier ptychographic microscopy. Nat. Photonics 2013, 7, 739.
  9. Albota, M.A.; Aull, B.F.; Fouche, D.G.; Heinrichs, R.M.; Kocher, D.G.; Marino, R.M.; Mooney, J.G.; Newbury, N.R.; O’Brien, M.E.; Player, B.E.; et al. Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays. Lincoln Lab. J. 2002, 13, 351–370.
  10. Marino, R.M.; Davis, W.R. Jigsaw: A foliage-penetrating 3D imaging laser radar system. Lincoln Lab. J. 2005, 15, 23–36.
  11. Hansard, M.; Lee, S.; Choi, O.; Horaud, R.P. Time-of-Flight Cameras: Principles, Methods and Applications; Springer Science & Business Media: Berlin, Germany, 2012.
  12. McCarthy, A.; Krichel, N.J.; Gemmell, N.R.; Ren, X.; Tanner, M.G.; Dorenbos, S.N.; Zwiller, V.; Hadfield, R.H.; Buller, G.S. Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection. Opt. Express 2013, 21, 8904–8915.
  13. McCarthy, A.; Ren, X.; Della Frera, A.; Gemmell, N.R.; Krichel, N.J.; Scarcella, C.; Ruggeri, A.; Tosi, A.; Buller, G.S. Kilometer-range depth imaging at 1550 nm wavelength using an InGaAs/InP single-photon avalanche diode detector. Opt. Express 2013, 21, 22098–22113.
  14. Altmann, Y.; McLaughlin, S.; Padgett, M.J.; Goyal, V.K.; Hero, A.O.; Faccio, D. Quantum-inspired computational imaging. Science 2018, 361, eaat2298.
  15. Mertz, J. Introduction to Optical Microscopy; Roberts and Company Publishers: Englewood, CO, USA, 2010; Volume 138.
  16. Pepe, F.V.; Di Lena, F.; Mazzilli, A.; Garuccio, A.; Scarcelli, G.; D’Angelo, M. Diffraction-limited plenoptic imaging with correlated light. Phys. Rev. Lett. 2017, 119, 243602.
  17. Zappa, F.; Tisa, S.; Tosi, A.; Cova, S. Principles and features of single-photon avalanche diode arrays. Sens. Actuators A 2007, 140, 103–112.
  18. Charbon, E. Single-photon imaging in complementary metal oxide semiconductor processes. Philos. Trans. R. Soc. A 2014, 372, 20130100.
  19. Antolovic, I.M.; Bruschini, C.; Charbon, E. Dynamic range extension for photon counting arrays. Opt. Express 2018, 26, 22234.
  20. Veerappan, C.; Charbon, E. A low dark count p-i-n diode based SPAD in CMOS technology. IEEE Trans. Electron Devices 2016, 63, 65.
  21. Antolovic, I.M.; Ulku, A.C.; Kizilkan, E.; Lindner, S.; Zanella, F.; Ferrini, R.; Schnieper, M.; Charbon, E.; Bruschini, C. Optical-stack optimization for improved SPAD photon detection efficiency. Proc. SPIE 2019, 10926, 359–365.
  22. Ulku, A.C.; Bruschini, C.; Antolovic, I.M.; Charbon, E.; Kuo, Y.; Ankri, R.; Weiss, S.; Michalet, X. A 512 × 512 SPAD image sensor with integrated gating for widefield FLIM. IEEE J. Sel. Top. Quantum Electron. 2019, 25, 6801212.
  23. Nguyen, A.H.; Pickering, M.; Lambert, A. The FPGA implementation of an image registration algorithm using binary images. J. Real-Time Image Process. 2016, 11, 799.
  24. Holloway, J.; Kannan, V.; Zhang, Y.; Chandler, D.M.; Sohoni, S. GPU Acceleration of the Most Apparent Distortion Image Quality Assessment Algorithm. J. Imaging 2018, 4, 111.
  25. Dadkhah, M.; Deen, M.J.; Shirani, S. Compressive Sensing Image Sensors-Hardware Implementation. Sensors 2013, 13, 4961.
  26. Chan, S.H.; Elgendy, O.A.; Wang, X. Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors. Sensors 2016, 16, 1961.
  27. Rontani, D.; Choi, D.; Chang, C.-Y.; Locquet, A.; Citrin, D.S. Compressive Sensing with Optical Chaos. Sci. Rep. 2016, 6, 35206.
  28. Gul, M.S.K.; Gunturk, B.K. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks. IEEE Trans. Image Process. 2018, 27, 2146.
  29. Motka, L.; Stoklasa, B.; D’Angelo, M.; Facchi, P.; Garuccio, A.; Hradil, Z.; Pascazio, S.; Pepe, F.V.; Teo, Y.S.; Rehacek, J.; et al. Optical resolution from Fisher information. Eur. Phys. J. Plus 2016, 131, 130.
  30. Řeháček, J.; Paúr, M.; Stoklasa, B.; Koutný, D.; Hradil, Z.; Sánchez-Soto, L.L. Intensity-based axial localization at the quantum limit. Phys. Rev. Lett. 2019, 123, 193601.
  31. QuantERA Call 2019. Available online: https://www.quantera.eu/calls-for-proposals/call-2019 (accessed on 22 March 2021).
  32. Raytrix. 3D Light Field Camera Technology. Available online: https://raytrix.de (accessed on 22 March 2021).
  33. D’Angelo, M.; Pepe, F.V.; Garuccio, A.; Scarcelli, G. Correlation Plenoptic Imaging. Phys. Rev. Lett. 2016, 116, 223602.
  34. Pepe, F.V.; Di Lena, F.; Garuccio, A.; Scarcelli, G.; D’Angelo, M. Correlation Plenoptic Imaging with Entangled Photons. Technologies 2016, 4, 17.
  35. Pepe, F.V.; Vaccarelli, O.; Garuccio, A.; Scarcelli, G.; D’Angelo, M. Exploring plenoptic properties of correlation imaging with chaotic light. J. Opt. 2017, 19, 114001.
  36. Di Lena, F.; Massaro, G.; Lupo, A.; Garuccio, A.; Pepe, F.V.; D’Angelo, M. Correlation plenoptic imaging between arbitrary planes. Opt. Express 2020, 28, 35857.
  37. Scagliola, A.; Di Lena, F.; Garuccio, A.; D’Angelo, M.; Pepe, F.V. Correlation plenoptic imaging for microscopy applications. Phys. Lett. A 2020, 384, 126472.
  38. D’Angelo, M.; Shih, Y.H. Quantum Imaging. Laser Phys. Lett. 2005, 2, 567–596.
  39. Gatti, A.; Brambilla, E.; Bache, M.; Lugiato, L.A. Ghost Imaging with Thermal Light: Comparing Entanglement and Classical Correlation. Phys. Rev. Lett. 2004, 93, 093602.
  40. Scala, G.; D’Angelo, M.; Garuccio, A.; Pascazio, S.; Pepe, F.V. Signal-to-noise properties of correlation plenoptic imaging with chaotic light. Phys. Rev. A 2019, 99, 053808.
  41. Brida, G.; Genovese, M.; Ruo Berchera, I. Experimental realization of sub-shot-noise quantum imaging. Nat. Photonics 2010, 4, 227.
  42. Ferri, F.; Magatti, D.; Lugiato, L.A.; Gatti, A. Differential Ghost Imaging. Phys. Rev. Lett. 2010, 104, 253603.
  43. Bruschini, C.; Homulle, H.; Antolovic, I.M.; Burri, S.; Charbon, E. Single-photon avalanche diode imagers in biophotonics: Review and outlook. Light Sci. Appl. 2019, 8, 87.
  44. Caccia, M.; Nardo, L.; Santoro, R.; Schaffhauser, D. Silicon photomultipliers and SPAD imagers in biophotonics: Advances and perspectives. Nucl. Instrum. Methods Phys. Res. A 2019, 926, 101–117.
  45. Ulku, A.; Ardelean, A.; Antolovic, M.; Weiss, S.; Charbon, E.; Bruschini, C.; Michalet, X. Wide-field time-gated SPAD imager for phasor-based FLIM applications. Methods Appl. Fluoresc. 2020, 8, 024002.
  46. Zanddizari, H.; Rajan, S.; Zarrabi, H. Increasing the quality of reconstructed signal in compressive sensing utilizing Kronecker technique. Biomed. Eng. Lett. 2018, 8, 239.
  47. Mertens, L.; Sonnleitner, M.; Leach, J.; Agnew, M.; Padgett, M.J. Image reconstruction from photon sparse data. Sci. Rep. 2017, 7, 42164.
  48. Katz, O.; Bromberg, Y.; Silberberg, Y. Compressive ghost imaging. Appl. Phys. Lett. 2009, 95, 131110.
  49. Jiying, L.; Jubo, Z.; Chuan, L.; Shisheng, H. High-quality quantum-imaging algorithm and experiment based on compressive sensing. Opt. Lett. 2010, 35, 1206–1208.
  50. Aßmann, M.; Bayer, M. Compressive adaptive computational ghost imaging. Sci. Rep. 2013, 3, 1545.
  51. Chen, Y.; Cheng, Z.; Fan, X.; Cheng, Y.; Liang, Z. Compressive sensing ghost imaging based on image gradient. Optik 2019, 182, 1021–1029.
  52. Liu, H.-C. Imaging reconstruction comparison of different ghost imaging algorithms. Sci. Rep. 2020, 10, 14626.
  53. Tibshirani, R. Regression shrinkage and selection via the lasso: A retrospective. J. R. Statist. Soc. B 2011, 73, 273–282.
  54. Řeháček, J.; Hradil, Z.; Zawisky, M.; Treimer, W.; Strobl, M. Maximum-likelihood absorption tomography. EPL 2002, 59, 694.
  55. Hradil, Z.; Rehacek, J.; Fiurasek, J.; Jezek, M. Maximum Likelihood Methods in Quantum Mechanics. In Quantum State Estimation; Lecture Notes in Physics; Paris, M.G.A., Rehacek, J., Eds.; Springer: Berlin, Germany, 2004; pp. 59–112.
  56. Rehacek, J.; Hradil, Z.; Stoklasa, B.; Paur, M.; Grover, J.; Krzic, A.; Sanchez-Soto, L.L. Multiparameter quantum metrology of incoherent point sources: Towards realistic superresolution. Phys. Rev. A 2017, 96, 062107.
  57. Paur, M.; Stoklasa, B.; Hradil, Z.; Sanchez-Soto, L.L.; Rehacek, J. Achieving the ultimate optical resolution. Optica 2016, 3, 1144.
  58. Paur, M.; Stoklasa, B.; Grover, J.; Krzic, A.; Sanchez-Soto, L.L.; Hradil, Z.; Rehacek, J. Tempering Rayleigh’s curse with PSF shaping. Optica 2018, 5, 1177.
Figure 1. (a) the scheme of a conventional plenoptic imaging (PI) device: the image of the object is focused on a microlens array, while each microlens focuses an image of the main lens on the pixels behind. Such a configuration entails a loss of spatial resolution proportional to the gain in directional resolution; (b) shows the scheme of a correlation plenoptic imaging (CPI) setup, in which directional information is obtained by correlating the signals retrieved by a sensor on which the object is focused with a sensor that collects the image of the light source. The image in (a) is reproduced with the permission from Ref. [16], copyright American Physical Society, 2017.
Figure 2. (a) shows the resolution limits, as a function of the longitudinal position, of the image of a double-slit mask with center-to-center distance d equal to twice the slit width; here, CPI outperforms both conventional imaging and standard PI with 3 × 3 directional resolution. The evident asymmetry of the CPI curve is due to the existence of two planes in which the object is focused: one at z_b = z_a and one at z_b = 0 (see [16]). Plots in (b) show the result of a simulation: the target is moved from the focused plane (top left) to an out-of-focus plane (top right). Starting from this position, we show the results of PI refocusing with 3 × 3 directional resolution (bottom left) and of CPI refocusing (bottom right); (c) shows the results of an experiment [16] in which the standard image of a triple slit was completely blurred (top), while the image obtained by CPI (bottom) was made fully visible by exploiting information on light direction. Plots in (c) are reproduced with the permission from Ref. [16], copyright American Physical Society, 2017.
Figure 3. SwissSPAD2 gate window profile. The transition times and the gate width are annotated in the figure. The gate width is user-programmable, and the minimum gate width in the internal laser trigger mode is 10.8 ns. The image is reproduced with permission from Ref. [45], copyright The Authors, published by IOP Publishing Ltd., 2020.
Figure 4. SwissSPAD2 photomicrograph (left) and pixel schematics (right). The pixel consists of 11 NMOS transistors, 7 with thick-oxide, and 4 with thin-oxide gate. The pixel stores a binary photon count in its memory capacitor. The in-pixel gate defines the time window, with respect to a 20 MHz external trigger signal, in which the pixel is sensitive to photons. The image is reproduced with permission from Ref. [22], copyright IEEE, 2018.
Figure 5. Qu3D scenario for GPU parallel processing based on NVIDIA Volta architecture as provided by an NVIDIA Jetson AGX Xavier device.
Figure 6. (a) Double-slit image reconstruction obtained by correlation measurements over N = 6000 frames, with two detectors of 128 × 128 and 10 × 10 pixel resolution; (b) the standard reconstruction repeated with only 10% of the available frames, chosen randomly; (c) compressive sensing reconstruction using the same dataset as in (b). In the former case, Pearson's correlation coefficient is r_red = 0.55; with CS, it increases to r_CS = 0.81.
Table 1. NVIDIA Jetson Xavier AGX: Technical Specifications.
Jetson Xavier AGX
GPU: 512-core NVIDIA Volta™ GPU with 64 Tensor Cores
CPU: 8-core NVIDIA Carmel Arm®v8.2 64-bit CPU, 8 MB L2 + 4 MB L3
Memory: 16 GB 256-bit LPDDR4x, 136.5 GB/s
PCIe: 1 × 8 + 1 × 4 + 1 × 2 + 2 × 1 (PCIe Gen4, Root Port & Endpoint)
DL Accelerator: 2× NVDLA engines
Vision Accelerator: 7-way VLIW vision processor
Connectivity: 10/100/1000 BASE-T Ethernet
