**Polarization-Sensitive Digital Holographic Imaging for Characterization of Microscopic Samples: Recent Advances and Perspectives**

#### **Giuseppe Coppola and Maria Antonietta Ferrara \***

National Research Council (CNR), Institute of Applied Sciences and Intelligent Systems, Via Pietro Castellino 111, 80131 Naples, Italy; giuseppe.coppola@cnr.it

**\*** Correspondence: antonella.ferrara@na.isasi.cnr.it

Received: 28 May 2020; Accepted: 22 June 2020; Published: 29 June 2020

#### **Featured Application: A simple and fast measurement of the state of polarization of vector optical beams is a very important topic for studying new optical effects and their applications in several fields, such as microelectronics, micro-photonics, remote sensing, and bioimaging.**

**Abstract:** Polarization-sensitive digital holographic imaging (PS-DHI) is a recent imaging technique based on interference among several polarized optical beams. PS-DHI allows simultaneous quantitative three-dimensional reconstruction and quantitative evaluation of polarization properties of a given sample with micrometer scale resolution. Since this technique is very fast and does not require labels/markers, it finds application in several fields, from biology to microelectronics and micro-photonics. In this paper, a comprehensive review of the state-of-the-art of PS-DHI techniques, the theoretical principles, and important applications are reported.

**Keywords:** digital holography; polarization sensitive imaging; birefringence; state of polarization (SoP)

#### **1. Introduction**

Digital holography (DH) is a fascinating alternative to conventional microscopy since it allows three-dimensional (3D) reconstruction, phase-contrast images, and an improved focal depth [1–5]. Basically, DH records the interference fringe pattern between an unperturbed reference beam and an object beam whose characteristics are changed by passing through a sample. The interference pattern (hologram) is acquired by a digital sensor array, and its post-processing yields a 3D quantitative image of the sample by numerical refocusing of a 2D image at different object planes [6]. When DH is implemented in an optical microscope, the objective lens provides a magnified image, allowing reconstruction of amplitude and phase-contrast images with a spatial resolution of less than 1 μm in all dimensions [7].
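In practice, the numerical refocusing step is often implemented with the angular spectrum propagation method. The following minimal Python/NumPy sketch (our own illustration under this assumption; function and parameter names are not taken from the cited works) propagates a reconstructed complex field to a plane at distance z:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Numerically refocus a complex optical field to a plane at distance z
    using the angular spectrum method (illustrative sketch)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)  # spatial frequencies along x [1/m]
    fy = np.fft.fftfreq(ny, d=dx)  # spatial frequencies along y [1/m]
    FX, FY = np.meshgrid(fx, fy)
    k = 2 * np.pi / wavelength
    # Propagation kernel; evanescent components (arg < 0) are suppressed
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kernel = np.where(arg >= 0,
                      np.exp(1j * k * z * np.sqrt(np.maximum(arg, 0))), 0)
    return np.fft.ifft2(np.fft.fft2(field) * kernel)
```

Scanning z and selecting the best-focused amplitude image reproduces the extended focal depth mentioned above.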

Digital holographic imaging (DHI) has several interesting features including high resolution, very fast acquisition, and 4D (3D + time) characterization of samples [8–10]. These properties are very useful, for example, when the specimen is moving or when the sample is subjected to external stimuli that can alter its shape and size, such as electrical, magnetic, or mechanical forces, chemical corrosion, or evaporation and deposition of further materials. Moreover, DHI is a non-contact and non-invasive technique, allowing label-free quantitative phase analysis of living cells; thus, measurements do not require the introduction of a tag, so cells are not altered. This approach can provide useful information that can be related to many underlying biological processes.

During the last decade, DHI has experienced several technological developments, including the integration of DHI with complementary characterization techniques (e.g., Raman spectroscopy or scanning electron microscopy [11–13]). A further important extension of DHI is the possibility to quantitatively measure the state of polarization (SoP) modified by a sample [14–17] and thus evaluate its birefringent and/or dichroic properties, which are frequently related to the micro- or even ultra-structure of the sample itself [18]. Therefore, the characterization of these properties and the detection of their possible variations, which can be due to either stress and strain in a given material or a disordered microstructure in biological specimens, could lead to a better understanding of the processes involved in a broad variety of applications.

Since the SoP is one of the fundamental properties of light, its evaluation has attracted growing interest in both basic research and practical applications of optics, with the aim of studying novel optical phenomena and new applications. Thus, the experimental evaluation of the SoP has become a fast-rising subject. Typically, polarization imaging has been carried out with different approaches—for example, by using a real-time polarization phase-shifting system [19], polarization contrast with near-field scanning optical microscopy [20], optical coherence tomography [21,22], and the Pol-Scope [23]. However, these techniques need several image acquisitions, generally obtained at different orientations of birefringent optical components (e.g., polarizers, quarter-wave, and/or half-wave plates), to retrieve the polarization state. The great advantage offered by polarization-sensitive digital holographic imaging (PS-DHI) is the possibility of using a single acquisition to retrieve the full polarization state of the sample under observation, thereby gaining in speed and simplicity.

This review paper aims to provide an overview of the state-of-the-art in PS-DHI. In the following sections, some basic concepts will be introduced for describing the polarization of light and the commonly used technical approaches for realizing PS-DHI. Then, some recent and important applications of PS-DHI in both the biomedical field and non-biomedical uses will be discussed.

#### **2. Theoretical Background**

The Stokes vectors and Mueller matrices allow a complete description of the polarization state for fully polarized, partially polarized, and even unpolarized light, comprising the optical axis and the degree of polarization. On the other hand, the Jones vectors, which can be used only for completely polarized light, are more appropriate for problems concerning coherent light (see Appendix A). As a general rule, the Jones vectors are useful for problems involving amplitude superposition, while the Mueller matrices are applied to problems involving intensity superposition [24]. Different approaches to PS-DHI have been proposed in the literature; however, the basic idea is to generate a hologram of the sample through the interference between the object wave and two orthogonally polarized reference waves, producing in this way two fringe patterns. The hologram of the magnified sample is recorded by a digital camera (such as a charge-coupled device or an active pixel sensor). The numerical reconstruction of such a hologram leads to two wavefronts, one relative to each reference wave and, thus, one for each perpendicular state of polarization [15]. Basically, PS-DHI approaches can be classified into two groups—(i) those which allow measurement of Jones vectors or Jones matrices and (ii) those which give information on Stokes vectors or Mueller matrices. Since holography needs a uniform laser beam, especially regarding the flatness of the phase front and the extended depth of field, in both approaches (quasi-)monochromatic light and perfect plane wavefronts are assumed. However, the realistic intensity distribution of laser sources is described by a Gaussian function, leading to problems in holographic-based applications, such as a reduced image contrast. These issues can be overcome by implementing beam-shaping systems based on field-mapping refractive beam shapers such as the πShaper [25].
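To make the link between the two formalisms concrete, the following minimal Python/NumPy sketch (illustrative only; the sign of S3 depends on the handedness convention adopted) converts a Jones vector of fully polarized light into the corresponding Stokes vector:

```python
import numpy as np

def jones_to_stokes(jones):
    """Convert a Jones vector [Ex, Ey] of fully polarized light into the
    Stokes vector (S0, S1, S2, S3); illustrative sketch."""
    ex, ey = jones
    cross = np.conj(ex) * ey
    s0 = abs(ex) ** 2 + abs(ey) ** 2   # total intensity
    s1 = abs(ex) ** 2 - abs(ey) ** 2   # horizontal/vertical preference
    s2 = 2 * cross.real                # +45/-45 degree preference
    s3 = 2 * cross.imag                # circular preference (convention-dependent sign)
    return np.array([s0, s1, s2, s3])
```

For fully polarized light the relation S0² = S1² + S2² + S3² holds, which is the constraint that the Jones description cannot relax; partially polarized light requires the Stokes/Mueller treatment.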

#### *2.1. PS-DHI for Jones Formalism*

In most polarimetric techniques, SoP parameters can be evaluated by applying more or less complex algorithms to various images acquired with different settings of polarization-analyzing components (polarizers, rotators, and retarders). Since these procedures need several rotations of the analyzing optics, the acquisition time is very long compared to the performance of a digital camera. Several solutions were proposed in the literature to improve the temporal resolution, such as the use of a liquid-crystal universal compensator [23]; however, the goal was reached only with techniques that record all parameters of the polarization state through the acquisition of a single image. Hence, since the pioneering paper published by Ohtsuka and Oka, who generated the interference between two orthogonal linearly polarized reference waves and an object wave [26] using a Mach–Zehnder interferometer, several other published works have followed this approach [15,27,28].

The typical experimental configuration for the recording of polarization holograms is illustrated in Figure 1. It consists of a modified Mach–Zehnder interferometer with two reference waves—*R1* and *R2*—that interfere with the object beam *O* [11,15–17]. Two operating conditions are possible [18]. In the first, the object beam is linearly polarized by a polarizer (*P1*, oriented at 45◦) and a quarter-wave plate (*QWP1*, oriented at 0◦ with respect to the incoming light). Due to the passage through the sample, the state of polarization of the beam *O* can change. An objective lens is used to collect the wave emitted from the sample. A polarized beam splitter (*PBS*) provides two orthogonal linearly polarized reference beams *R1* and *R2*, where orthogonality avoids any interference between the reference beams. Additionally, a pair of polarizers or quarter-wave plates (*QWP2* and *QWP3*) preserves the linear polarization when their fast axes are aligned parallel to the polarization states of the respective reference waves. In the second operating condition, circularly polarized light is incident on the sample by orienting the *QWP1* fast axis at −45◦ (left-handed circular polarization) or +45◦ (right-handed circular polarization) with respect to the transmission axis of the polarizer *P1*. The two orthogonal linearly polarized references are transformed into right and left circularly polarized beams by the quarter-wave plates *QWP2* and *QWP3*, oriented at +45◦ and −45◦ with respect to the polarization states of the two reference waves, respectively. The beam splitter *BS1* overlaps the *O*, *R1*, and *R2* beams, and the interference among these waves gives rise to the hologram that is acquired by a digital camera in an off-axis configuration, i.e., with the three waves propagating along slightly different directions (as highlighted in the inset in Figure 1).
The intensities of *O*, *R1*, and *R2* can be adjusted by the half-wave plates *HWP1* and *HWP2*, while the angles of incidence of *R1* and *R2* can be controlled by mirrors *M2* and *M3*, respectively. An example of the recorded digital hologram is shown in Figure 2a. In the inset, where a magnification of the interference between the object and the two orthogonal reference waves is reported, the two sets of fringe patterns are clearly visible.

**Figure 1.** Basic scheme of the polarization-sensitive digital holographic imaging (PS-DHI) experimental setup. Abbreviations: M: mirrors; BS: beam splitter; PBS: polarized beam splitter; P: polarizer; QWP: quarter-wave plate; HWP: half-wave plate. In the inset, the incident directions and the polarization states of the object and reference waves are highlighted.

**Figure 2.** (**a**) Example of polarization hologram; in the inset the two fringe patterns are highlighted; (**b**) Fourier amplitude spectra of the hologram: the frequencies of the zero-order of diffraction, of the virtual and real images, and of parasitic interferences (P) are clearly visible.

Since *R1* and *R2* are orthogonally polarized, they do not interfere: *R1*·*R2*\* = *R1*\*·*R2* = 0 (where the asterisk indicates the complex conjugate), so the hologram intensity at the digital camera surface is [11,15]

$$\begin{aligned} H(\mathbf{x}, \mathbf{y}) &= (O + R\_1 + R\_2) \cdot (O + R\_1 + R\_2)^\* \\ &= |O|^2 + |R\_1|^2 + |R\_2|^2 + OR\_1^\* + OR\_2^\* + O^\*R\_1 + O^\*R\_2 \end{aligned} \tag{1}$$

In Equation (1), the first three terms correspond to the zero diffraction order, the fourth and fifth terms produce the virtual images, and the last two terms form the real images. Computing the Fourier transform of the acquired hologram, these terms appear spatially separated in the Fourier space, due to the off-axis configuration, as shown in Figure 2b, where some parasitic interferences are also highlighted [16,29]. In order to recover the information of the real images, the corresponding spectra can be selected by two different spatial filters [15,18].
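Numerically, the selection of one spectral term can be sketched as follows (a hypothetical minimal Python/NumPy example; in practice the carrier position and the filter radius are chosen by inspecting the measured spectrum, as in Figure 2b):

```python
import numpy as np

def filter_sideband(hologram, carrier, radius):
    """Isolate one off-axis sideband of a recorded hologram by Fourier
    filtering and re-center it (illustrative sketch). `carrier` is the
    (row, col) offset of the sideband from the center of the shifted
    spectrum; `radius` is the filter half-width in pixels."""
    ny, nx = hologram.shape
    spectrum = np.fft.fftshift(np.fft.fft2(hologram))
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    filtered = np.where(mask, spectrum, 0)
    # Shift the selected sideband back to the spectrum center (demodulation)
    filtered = np.roll(filtered, (-carrier[0], -carrier[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(filtered))
```

Applying this once per reference-wave carrier yields the two orthogonal components of the object beam used in the reconstruction described below.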

As in classical holography, the reconstruction is obtained by multiplying the digital hologram by a digitally computed reference wave and then performing the inverse Fourier transform of the filtered spatial frequency components. By using the standard reconstruction algorithm on each filtered region, the two orthogonal components of the object beam (*Ox* and *Oy*) can be retrieved [15,16,18,30]. Thus, the amplitude map and the phase map for each polarization component can be reconstructed. To evaluate the SoP change of the object beam due to the interaction with the sample under test, typically two parameters are experimentally measured—the amplitude ratio β, which is related to the different transmitted intensities of the two orthogonal components and corresponds to the azimuth of the polarization ellipse, and the phase difference Δϕ, which contains information on the different optical paths due to the refractive index anisotropy linked to the sample structure [18,30].

So, the amplitude ratio angle can be evaluated as

$$\beta = \arctan\left(\frac{|O\_y|}{|O\_x|}\right) \tag{2}$$

Equation (2) is obtained assuming that both the reference waves have the same intensity (|*R1*|=|*R2*|); this identity is experimentally achieved by controlling the orientation of the half-wave plates *HWP1* and *HWP2* in Figure 1 [18,30]. Regarding the phase difference between the orthogonal components of the object beam, it can be expressed as

$$
\Delta \varphi = \text{phase}(O\_y) - \text{phase}(O\_x) + \Delta \varphi\_R \tag{3}
$$

where Δϕ*<sup>R</sup>* = *phase*(*R2*) − *phase*(*R1*) can be removed by a calibrated phase difference offset superimposed on the phase difference image [31]. The evaluation of β and Δϕ allows the SoP of the sample under test to be uniquely obtained; in other words, the distribution of the Jones vector at the surface of the sample under test can be retrieved from a single hologram acquisition [31]. For example, if Δϕ = 0 or π, a linear polarization is retrieved, while if Δϕ = π/2 and β = π/4 or Δϕ = −π/2 and β = π/4, a right or left circular polarization, respectively, is detected. Elliptical polarization states, i.e., intermediate polarization values, can also be measured. Considering the polarization ellipse represented in Figure 3, which corresponds to the projection of the trajectory of the extremity of the vector *O* onto the xy plane, it can be characterized by the parameters ψ (orientation angle) and χ (ellipticity angle), which can be evaluated by the following equations [15,17]:

$$\begin{array}{rcl} \psi &=& \frac{1}{2} \arctan\left(\frac{2|O\_x||O\_y|\cos(\Delta\varphi)}{|O\_x|^2 - |O\_y|^2}\right) = \frac{1}{2} \arctan[\tan(2\beta)\cos(\Delta\varphi)] \\ \chi &=& \frac{1}{2} \arcsin\left(\frac{2|O\_x||O\_y|\sin(\Delta\varphi)}{|O\_x|^2 + |O\_y|^2}\right) = \frac{1}{2} \arcsin[\sin(2\beta)\sin(\Delta\varphi)] \end{array} \tag{4}$$

**Figure 3.** Polarization ellipse. The ellipticity is given by the ratio of the length of the semiminor axis to the length of the semimajor axis, b/a = tan(χ). The ellipse is further described by its azimuth ψ, measured counterclockwise from the x axis [10].

Regarding the direction of rotation of the vector *O*, it is defined as left-hand polarization (L-state) or right-hand polarization (R-state) depending on whether −π ≤ Δϕ ≤ 0 or 0 ≤ Δϕ ≤ π, respectively.
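Following Equations (2)–(4), the SoP maps can be computed pixel-wise from the two reconstructed components; a minimal Python/NumPy sketch (our own illustration; note that the orientation angle ψ is undefined for exactly circular states) is:

```python
import numpy as np

def sop_parameters(Ox, Oy):
    """Amplitude ratio angle beta (Eq. 2), phase difference dphi (Eq. 3,
    with the reference offset already removed), and ellipse angles psi and
    chi (Eq. 4) from the reconstructed components; illustrative sketch."""
    beta = np.arctan2(np.abs(Oy), np.abs(Ox))
    dphi = np.angle(Oy) - np.angle(Ox)
    psi = 0.5 * np.arctan2(2 * np.abs(Ox) * np.abs(Oy) * np.cos(dphi),
                           np.abs(Ox) ** 2 - np.abs(Oy) ** 2)
    chi = 0.5 * np.arcsin(np.sin(2 * beta) * np.sin(dphi))
    return beta, dphi, psi, chi
```

For instance, Ox = 1 and Oy = i gives β = π/4, Δϕ = π/2, and χ = π/4, i.e., the right circular state discussed above; the sign of Δϕ then fixes the L- or R-state handedness.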

Finally, in a recent and very interesting work, a new compact experimental setup has been proposed as a proof of concept [32]. The birefringence distribution was measured from the interference fringes of three circularly polarized beams—two mutually orthogonally polarized (left-handed and right-handed) reference waves and a right-handed object wave. These three interfering beams were obtained by monolithic gratings, so all waves were crossed at the same angle on the hologram plane, generating two sets of fringe patterns. With this approach, all the optical elements required to obtain the three beams in the setup illustrated in Figure 1 can be replaced by three grating vectors; moreover, in order to have practical interferometry, a monolithic grating containing three diffractive gratings positioned in a threefold-symmetric arrangement has been employed [32,33]. A schematization of both the monolithic gratings and the operating principle is illustrated in Figure 4. When a plane beam impinges on the monolithic gratings, the first-order diffractions of each grating intersect the hologram plane at the same angles. As described, analyzing the interference patterns allows the estimation of the anisotropic phase shift in the object beam. Thus, this arrangement has the potential to evaluate a two-dimensional birefringence distribution in a single-shot and compact way.

**Figure 4.** Representation of (**a**) the monolithic gratings, (**b**) interference of the diffracted beams, (**c**) monolithic gratings with the half-wave plate (HWP) and sample of birefringent medium, and (**d**) experimental setup designed for measuring birefringence distributions (Reproduced with permission from Shimomura et al. [32], © The Optical Society, 2018).

#### *2.2. PS-DHI for Stokes Vectors or Mueller Formalism*

Jones vectors cannot be used to describe light that does not remain in a single polarization state. So, to treat fully, partially, or unpolarized light, the Stokes parameters (S1, S2, S3) are often used. In fact, unlike the Jones vectors, the Stokes parameters can define all kinds of SoPs of an optical beam. Unfortunately, as in the case of the Jones formalism, the typical methods for estimating the Stokes parameters of an optical wave require registering several intensity distributions at different detection states by rotating polarizing optical elements. Generally, these methods require time-sequential operations using an arrangement with a rotating waveplate and a fixed analyzer. However, systematic errors can be generated by the rotating elements, due to their inhomogeneous transmittance, and obviously, fast acquisitions are not possible [34,35]. Other proposed methods need multichannel simultaneous measurement; in this case, the amplitude or the wavefront is split into several channels, each analyzed with appropriate polarization optical components, leading to a complicated and cumbersome dynamic measurement system [36,37]. Moreover, systematic and calculation errors are induced by imprecise image matching.

Thus, it is clear that the development of real-time imaging polarimeters without the need for mechanical or active elements for polarization control is still a valuable aim. With this purpose, several approaches have been proposed; in one of these, multiple interference patterns are produced at the surface of a digital camera, and the information of each polarized component of the optical beam under investigation is linked to fringe patterns having different spatial frequencies. The Stokes parameters can then be estimated through demodulation of the obtained image by a Fourier transform approach [38]. Recently, the combination of DH with the theory of the Pancharatnam–Berry (PB) phase has paved the way toward a new method for evaluating the state of polarization of arbitrary waves with a single exposure of the interference pattern and a quick acquisition for one object wave, with no moving optical components [39,40]. Pancharatnam [41] and Berry [42] introduced the so-called PB phase of an optical wave that is taken along a closed cycle on the Poincaré sphere. With this formalism, polarization transformations give rise to two optical phase retardations—one related to the optical path difference (called the dynamic phase) and an extra one equal to minus half of the solid angle subtended by the closed path on the sphere. Therefore, this extra phase, i.e., the PB phase, depending only on the geometry of the transformation path on the Poincaré sphere, is also called the "geometric phase." The PB phase generally occurs in the following two conditions: when there is a variation of the polarization state during beam propagation, and when there is a variation of the mode structure during beam propagation [43,44]. Regarding the first condition, the PB phase and the polarization state variation are quantitatively related, giving the possibility to estimate the SoP from a PB phase measurement.
Even though DH allows quantitative phase retrieval of the object beam [45,46], when the phase difference contains both the PB phase (related to the SoP) and the dynamic phase (related to the optical density of the sample), these two phases are indistinguishable in the reconstruction process. To separate the two contributions, two holograms should be generated—one to retrieve the dynamic phase and the other for the geometric phase. The dynamic phase is then used for the evaluation of the refractive index or 3D shape of the sample under investigation, whereas its interaction with the polarized light is estimated through the geometric phase.

Basically, two classes of experimental setups based on the geometric phase can be found in the recent literature to evaluate the full SoP of an optical beam. In the first, shown in Figure 5a, a triangular common-path interferometer (TCPI) is used to generate two interferograms which are aligned together on a single charge-coupled device (CCD) target [40]. This is made possible by dividing the object beam into two orthogonal circularly polarized components (left-handed and right-handed) through the TCPI; these two components then interfere with a reference beam. However, this implementation is difficult to align, leading to low measurement resolution, a reduced field of view (due to the separated fringe patterns for the orthogonal components recorded in two different regions of the same CCD), and image matching errors. The second setup, reported in Figure 5b, is based on a hybrid polarization-angular multiplexing digital holographic approach (PAMDH) [47], implemented by a double-channel Mach–Zehnder interferometer. In this case, two orthogonal, linearly polarized reference beams interfere with the object beam.

**Figure 5.** Experimental setup for polarization measurement—(**a**) triangular common-path interferometer (TCPI). GTP: Glan–Taylor polarizer; BS1 and BS2: beam splitter; PCS: polarization conversion system; λ/4: quarter-wave plate; PBS: polarizing beam splitter; M: mirror; RT: reversed telescope; CCD: charge-coupled device. The elements inside the dashed box form a TCPI (Reproduced with permission from Qi et al. [40], AIP, 2019); (**b**) schematic of the PAMDH system based on the geometric phase. BE: beam expander; L: lens; P: polarizer; S: sample; HWP: half-wave plate; NPBS1-NPBS2: non-polarized beam splitters; PBS1-PBS2: polarized beam splitters; M1-M3: mirrors; NDF: neutral density filter; MO: microscope objective; TL: tube lens (reprinted from Dou et al. [47]).

In both cases, the acquisition of a composite hologram generated by combining two patterns of interference fringes with distinct orientations allows the simultaneous estimation of the orthogonal polarization components of an optical wave. Once these components are retrieved, the Stokes parameters can be evaluated by applying the geometric phase theory [47]. The main difference between the two reported schemes is that in the setup depicted in Figure 5b the multiplexed hologram covers the whole area of the CCD, while in the solution presented in Figure 5a a smaller field of view is obtained, to avoid the influence of the change in the intensity and polarization distribution of the reference wave. In this configuration, the field of view is determined by the regions of the two images on the CCD related to the two orthogonal components of the object wave, separated by a distance controlled by the TCPI. Therefore, since the arrangement of Figure 5b is more suitable for producing and controlling a larger field of view, for simplicity of discussion the basic theory related to this scheme is reported here.

In the PAMDH approach, Equation (1) becomes [47]

$$\begin{array}{lcl} H(\mathbf{x}, \mathbf{y}) &= \left(O + R\_V + R\_H\right) \cdot \left(O + R\_V + R\_H\right)^\* \\ &= |O\_V|^2 + |O\_H|^2 + |R\_V|^2 + |R\_H|^2 + O\_V R\_V^\* + O\_H R\_H^\* + O\_V^\* R\_V + O\_H^\* R\_H \\ &= I\_V + I\_H \end{array} \tag{5}$$

$$\begin{array}{rcl} I\_V &=& |O\_V|^2 + |R\_V|^2 + 2|O\_V||R\_V|\cos\varphi\_V \\ I\_H &=& |O\_H|^2 + |R\_H|^2 + 2|O\_H||R\_H|\cos\varphi\_H \end{array} \tag{6}$$

where ϕ*<sup>V</sup>* and ϕ*<sup>H</sup>* are the phase differences between the two orthogonal components of the reference beams and the corresponding components of the object wave. Therefore, the two orthogonal complex amplitudes *OV* and *OH* of the object wave can be numerically retrieved. To reconstruct the SoP, the polarization state should be represented on the Poincaré sphere. With this aim, a spherical coordinate system is shown in Figure 6; here, the vertical and horizontal states (V, H) are positioned on the two poles, whereas the polar and azimuthal angles are given by 2χ*<sup>1</sup>* and 2ψ*1*, respectively.

**Figure 6.** Theoretical model for estimating polarization state based on geometric phase (reprinted from Dou et al. [47]).

In accordance with Pancharatnam's theory [41], the intensities of the orthogonal components of the object wave of total intensity *IO* are given by

$$\begin{array}{rcl} I\_{OV} &=& |O\_V|^2 = I\_O \sin^2\left(\widehat{OH}/2\right) \\ I\_{OH} &=& |O\_H|^2 = I\_O \cos^2\left(\widehat{OH}/2\right) \end{array} \tag{7}$$

where $\widehat{OH}$ denotes the arc between the point representing the SoP of the object wave and the pole *H* on the Poincaré sphere.

Once the distribution of the two components of the complex amplitude of the object wave has been retrieved, the polar angle can be evaluated as

$$
2\chi\_1 = \pi/2 - \widehat{OH} = \pi/2 - 2\arctan\left(|O\_V|/|O\_H|\right) \tag{8}
$$

As previously discussed, for each polarization state *V* and *H*, the phase difference between the reference and the object waves contains the contributions of both the dynamic and the geometric phase; in particular,

$$
\varphi\_V = \varphi\_{Vd} + \varphi\_{Vpb} \tag{9}
$$

$$
\varphi\_H = \varphi\_{Hd} + \varphi\_{Hpb} \tag{10}
$$

In Equations (9) and (10), the first term in the sum denotes the dynamic phase difference and the second term is the PB phase. For the horizontal polarization state, the PB phase is numerically equivalent to half of the area of the geodesic triangle ROH (the red region in Figure 6), i.e., ϕ*Hpb* = Ω/2 [48]. Correspondingly, for the vertical polarization state, the PB phase is given by ϕ*Vpb* = −2ψ*<sup>1</sup>* + Ω/2. So, the azimuthal angle can be evaluated as [47]

$$2\psi\_1 = (\varphi\_H - \varphi\_V) - (\varphi\_{Hd} - \varphi\_{Vd}) \tag{11}$$

Taking advantage of DH, the two orthogonal components of the optical field can be obtained as

$$\begin{aligned} E\_V &= k|O\_V||R\_V|e^{i\varphi\_V} \\ E\_H &= k|O\_H||R\_H|e^{i\varphi\_H} \end{aligned} \tag{12}$$

where *k* is a constant depending on the exposure time and the response of the digital camera. The holographic reconstruction approach allows the phases ϕ*<sup>V</sup>* and ϕ*<sup>H</sup>* to be evaluated; as a consequence, the first term in brackets on the right side of Equation (11) can be calculated. The second term in brackets, Δϕ = (ϕ*Hd* − ϕ*Vd*), is a fixed value for the adjusted PAMDH configuration and can be corrected by measuring a beam linearly polarized along 45◦, produced by a standard polarizer.

Since the two reference waves are adjusted to have the same intensity, the azimuthal and polar angles can be estimated by the following relationships:

$$\begin{aligned} 2\psi\_1 &= \arg(E\_H/E\_V) - \Delta\varphi \\ 2\chi\_1 &= \pi/2 - 2\arctan(|E\_V|/|E\_H|) \end{aligned} \tag{13}$$

Thus, the solid angle Ω can be evaluated as a function of the angles 2ψ*<sup>1</sup>* and 2χ*<sup>1</sup>* [39], and the dynamic phases ϕ*Hd* and ϕ*Vd* can be calculated [40,47–49]. Finally, the normalized Stokes parameters (S1, S2, S3) can be expressed as functions of (2ψ*1*, 2χ*1*) through trigonometric relationships, as shown in Figure 6 and as described by the following relations [44]:

$$\begin{aligned} S\_1 &= \sin 2\chi\_1 \\ S\_2 &= \cos 2\chi\_1 \cos 2\psi\_1 \\ S\_3 &= \cos 2\chi\_1 \sin 2\psi\_1 \end{aligned} \tag{14}$$

The Stokes parameters evaluated in this way describe the state of polarization of the object wave over the measurement area; hence, they fully characterize the SoP of an arbitrarily polarized wave.
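As a numerical illustration of Equations (13) and (14), the following minimal Python/NumPy sketch (our own; the `dphi_offset` argument stands for the calibrated dynamic-phase difference Δϕ discussed above) evaluates the normalized Stokes parameters from the two reconstructed field components:

```python
import numpy as np

def stokes_from_fields(Ev, Eh, dphi_offset=0.0):
    """Normalized Stokes parameters (S1, S2, S3) from the reconstructed
    orthogonal components, following Eqs. (13)-(14); illustrative sketch
    with V and H on the poles of the Poincare sphere as in Figure 6."""
    two_psi = np.angle(Eh / Ev) - dphi_offset                     # azimuthal angle, Eq. (13)
    two_chi = np.pi / 2 - 2 * np.arctan(np.abs(Ev) / np.abs(Eh))  # polar angle, Eq. (13)
    s1 = np.sin(two_chi)                                          # Eq. (14)
    s2 = np.cos(two_chi) * np.cos(two_psi)
    s3 = np.cos(two_chi) * np.sin(two_psi)
    return np.array([s1, s2, s3])
```

For fully polarized light the returned vector has unit norm by construction, so each measured pixel maps to a point on the surface of the Poincaré sphere.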

Finally, it is worth noting that the signal light from some imaging techniques, such as fluorescence imaging, is spatially incoherent, which poses a further challenge for holographic imaging. Moreover, the use of light sources such as lasers, with high temporal and spatial coherence, reduces the quality of the hologram owing to speckle and spurious fringe generation, which can decrease the spatial sensitivity of the system [50,51]. Consequently, polarization-sensitive imaging of an incoherent scene or using partially incoherent illumination should be implemented. Unfortunately, only a couple of works have proposed incoherent polarization-sensitive holography/interferometry. For example, Zhu and Shi [52] investigated a self-interference polarization holographic imaging (Si-Phi) method that allows real-time 3D imaging of an incoherent scene. The authors developed an in-line polarization holography configuration equipped with a polarization-resolving detector array; this setup allows single-shot acquisition of the complex-valued hologram, and the results demonstrated both 3D and real-time imaging capabilities. Even if the use of incoherent sources is still immature, future developments are expected in this field.

#### **3. PS-DHI Applications**

As described in the previous section, PS-DHI can measure the parameters β and Δϕ, thus allowing the SoP of a beam that interacts with a specimen to be retrieved. The modification of the SoP in the transmitted or reflected beam gives information about the structure, the composition, or the optical properties of the specimen under study. Basically, the following two physical properties of matter can alter the polarization state of a wave [18]:

- birefringence, i.e., an anisotropy of the refractive index that introduces a phase retardation between the orthogonal components of the beam;
- dichroism, i.e., an anisotropy of the absorption that modifies the relative amplitudes of the orthogonal components.

The study of the polarization state covers different applications and research fields, such as measurement of stress, geology, chemistry, display technologies, medicine and medical diagnosis, etc. It has been demonstrated that the PS-DHI technique can be used for noninvasive quantitative imaging of live cells or for the evaluation of the dynamic phase difference induced by the birefringence of liquid crystals. In the following, the state of the art of PS-DHI applications is reported in two subsections, separating biological applications from microelectronics and micro-photonics ones.

#### *3.1. Microelectronics and Nanophotonics Quantitative Phase Imaging*

Since PS-DHI was introduced in the literature, it has been applied to samples with well-known SoP, such as bent fibers [16], stressed polymers [15], waveplates [39,40], or liquid crystals (LCs) [47], to confirm the potential offered by this technique. Among these applications, LCs seem to be the most interesting due to their use in displays. For example, Park et al. [49] used PS-DHI to measure the spatially resolved Jones matrix components of the light passing through a single pixel of a liquid-crystal display (LCD) as a function of the voltage applied to the LCD panel. However, in the proposed setup, the authors needed to acquire four independent interferograms with two different polarization states to reconstruct the Jones matrix component maps of a sample, so the properties of PS-DHI were not fully exploited. In the published work, an in-plane switching liquid-crystal display (IPS-LCD) was characterized. However, the same approach can be used to characterize other types of LCDs as well—for example, the full RGB channels of LCD pixels can be characterized in terms of their Jones matrix components using a DH setup with multiple lasers or in spectroscopic modality.

An LC depolarizer was characterized using PS-DHI with the scheme reported in Figure 5b by Dou et al. [47]. In this case, the LC depolarizer consists of a collection of HWPs with randomly distributed optical axes; the Stokes parameter distributions of the output beams, for a linearly polarized incident beam with θ = π/4 and for a left-handed circularly polarized incident beam, were measured, confirming the depolarizing effect induced by the LC [47].

Regarding nanophotonics applications, images obtained with DHI have the disadvantage of being diffraction-limited; thus, a device of nanometre size typically covers just a few camera pixels, leading to a low resolution. On the other hand, a full-field radiation pattern (i.e., polarization, amplitude, and phase) measurement at all angles gives a complete polarizability tomography of nanophotonic devices such as metasurfaces and nanoantennae. For this reason, Röhrich et al. [53] combined Fourier microscopy, polarimetry, and digital holography, generating a signal over an entire CCD chip, for the angle-resolved amplitude, polarization, and phase imaging of single nano-objects. In particular, the authors analyzed the orbital angular momentum (OAM) content of light scattered by a family of plasmonic spirals. The results obtained in Ref. [53] are summarized in Figure 7: the intensity radiation distribution is recorded by a Fourier microscope (Figure 7a, logarithmic scale). Then, the Stokes parameters are determined by polarization-resolved imaging; these parameters completely define the SoP of the wavefront for each wave vector in the radiation pattern and, consequently, can be transformed into the polarization ellipse parameters, namely the ellipticity and orientation angles (Appendix A, Equation (A5)). The evaluated ellipse parameters are shown in Figure 7b, where a complete helicity conversion in a doughnut-like pattern with five spiraling arms around it can be observed. Finally, by using DHI, the individual phase profiles of two orthogonally polarized field components were retrieved; Figure 7c shows the Fourier-transform-domain representation of a hologram corresponding to an m = −5 spiral in circular co-polarization, while Figure 7d shows the two evaluated phase maps related to the co- and cross-polarized channels; for the latter channel, the helical shape around the optical axis is clearly visible. As suggested by Röhrich et al. [53], this approach can be applied to several nanophotonics problems, such as plasmonic oligomer antennae for emission and sensing, metasurfaces for monitoring (transmitted and reflected) wavefronts as a function of the incident amplitude, phase and vector content, and nonlinear metasurfaces whose efficiency and angular distribution depend on the phase gradients structured into the metasurface geometry.

**Figure 7.** Demonstration of combined Fourier microscopy, polarimetry, and digital holography. R-state circular polarized input and an m = −5 spiral nanostructure were used. (**a**) Fourier map of intensity; (**b**) reconstructed polarization ellipse parameters; (**c**) digital Fourier transform of an interferogram obtained with R-state circular polarized detection; (**d**) reconstructed phase maps for R-state and L-state circular polarized detection. The green and red arrows specify the input and output polarization, respectively (reprinted from Röhrich et al. [53]).

#### *3.2. Biological Applications*

The order of the molecular architecture can play a role in the dependence of the refractive index on the polarization and propagation direction of the optical beam. Indeed, the anisotropy of the refractive index, i.e., birefringence, can be related to the presence of filament arrays and/or membranes (made of a lipid bilayer that exhibits some degree of orientation) included in organelles and cells. Several pathological modifications, due, for example, to physical damage or to a disease such as cancer, may modify the optical properties of biological tissues by altering their structure and, thus, lead to a change in their birefringence pattern [54–56]. Therefore, the detection of these modifications could become a valuable tool to identify the molecular order, follow events, and diagnose diseases. Typically, the characterization of birefringence is carried out by using quantitative polarized light microscopy [57–59], polarimetry [60–63], or experimental determination of the Mueller matrix [64–66]. Since these techniques require the acquisition of several images to retrieve the birefringence of the sample, they are too slow for imaging living semi-transparent biological samples.

For this reason, PS-DHI has a good chance of being used as a label-free and fast technique (only one acquired image) for the study of the SoP of biological samples. However, this technique has started to be used for biological imaging only in the last few years; therefore, the scientific papers in this field of application are still few. For example, Wang et al. [67] studied the birefringence distribution of biological tissues by using polarization-dependent phase-shifted holograms. Even if the authors used a setup based on a modified Mach–Zehnder interferometer, which records polarization holograms by rotating a polarizer and thus requires multiple acquisitions, this approach takes advantage of DH and can therefore be adapted to a single image acquisition. Interestingly, the results demonstrated that the median birefringence value of cancerous bladder tissues is higher than that of normal bladder tissues. Hence, this approach can be effectively used to discriminate between cancerous and non-cancerous tissues.

PS-DHI in a configuration similar to that shown in Figure 1 has also been used to distinguish among three different B-leukemia transformed cell lines, providing a diagnosis method for acute lymphoblastic leukemia type B, a cancer with a high mortality rate that affects B lymphocytes [68]. The same approach has also been applied to human sperm cells [30]. In fact, the heads of morphologically normal human sperm cells exhibit a strong birefringence due to longitudinally oriented protein filaments [69]. When the acrosome reaction occurs, i.e., when spermatozoa are ready to approach the egg, the local protein organization disaggregates and, as a consequence, the intrinsic birefringence properties change: reacted spermatozoa show only a partial head birefringence, typically in the post-acrosomal region [70]. In the paper published by De Angelis et al. [30], PS-DHI is proposed in a configuration combined with Raman spectroscopy (RS) for a complete, accurate and label-free estimation of the biological properties of fixed, air-dried sperm. Indeed, PS-DHI provides quantitative information on the cell morphology, motility and SoP [8,71], whereas the RS technique gives a complementary, specific biochemical fingerprint of the sample without harming the integrity of live specimens [72]. In Figure 8a,b, the amplitude parameter β and phase difference Δϕ retrieved by PS-DHI for a control sperm cell and for a reacted sample are reported. The Δϕ map of the control sperm cell shows a birefringence distribution (bright pattern) over the whole sperm head, while the Δϕ map of the reacted sperm cell presents a reduced birefringence distribution, confined to the post-acrosomal area. The acrosome reaction was induced by a heparin treatment.
A statistical analysis of the distribution of the birefringence patterns of sperm from three donors exposed to heparin for 0 h (control sample) and 4 h (reacted sample) was performed, and the results are summarized in Figure 8c. By combining PS-DHI and RS, the authors proposed a new, fully label-free protocol for the recognition of healthy and reacted sperm cells. In detail, sperm cells with an entirely birefringent head are first selected by PS-DHI, ensuring their integrity; then, the heparin treatment is performed on these chosen spermatozoa to induce the acrosome reaction. Finally, the effectively reacted spermatozoa are selected by estimating their polarization state again with PS-DHI, combined with the study of their Raman spectra. This interesting combined approach identifies the spermatozoa in which the modification of the birefringence distribution is due only to the acrosome reaction, while those in which this variation is correlated with defects are excluded [30].

In 2019, Gordon et al. [73] proposed a proof of concept of a holographic fiberscope that produces full-field images of amplitude and quantitative phase in two polarizations, using a novel parallelized transmission-matrix approach. Polarimetric imaging of birefringent and diattenuating samples was carried out to verify the feasibility of this approach. Thanks to the small diameter and high flexibility of optical fibers, imaging through them is already implemented in biomedical endoscopy and industrial inspection. Therefore, the introduction of a holographic fiberscope for birefringence measurements appears very interesting for biological applications and remote sensing.

**Figure 8.** (**a**) Amplitude parameter β and phase difference Δϕ relative to a control sperm cell (0 h in heparin). Colorbars indicate the mapping of the phase variation. (**b**) The same polarization parameters measured for a reacted sample (4 h in heparin). (**c**) Distribution of the birefringence patterns of sperm from three donors exposed to heparin for 0 h (control sample) and 4 h (reacted sample). Scale bar: 4 μm (adapted from De Angelis et al. [30]).

An application worthy of note is that studied by Öhman et al. [74,75]. They developed a dual-view polarization-resolved digital holographic system for particle tracking [74], as well as for determining whether a particle has a spherical shape and for estimating its size [75]. This novel system images the same volume from two perpendicular directions, giving information about the amplitude ratio angle β from two views. The authors found that the size of a non-spherical particle can be estimated from β with an upper limit of about nm. This approach could be very useful to detect and distinguish different particles, including biological particles, under different flow conditions by estimating their polarization response.

Finally, the instantaneous (single-shot) measurement of the spatial variations of the phase retardance, induced in either the geometric or the dynamic phase, is carried out through an alternative approach presented in Ref. [76], where a quantitative fourth-generation optics microscopy (Q4GOM) was developed; even though this approach does not characterize the full SoP, thanks to its unique optical performance it can open new research directions in the diagnosis of composite optical nanostructures or in biomolecular sensing. The phase restoration is based on the self-interference of the optical wave and is achieved in an intrinsically stable common-path setup. Basically, for live cell imaging, an add-on fourth-generation (4G) optics imaging module is combined with a polarization-adapted interference microscope, as shown in Figure 9a. Light linearly polarized along the azimuth of the compensating quarter-wave plate QWP1, obtained through the input polarizer P1, enters a Mirau microscope objective (MMO) via the beam splitter cube BS2. Thus, the object (reflected from the sample) and reference (reflected from the reference mirror M) beams are orthogonally linearly polarized after passing twice through QWP1 and QWP2 (see Figure 9b). Another quarter-wave plate (QWP3) transforms the orthogonal linear polarizations into L-state and R-state circular polarizations. The light collimated by the MMO is focused by the tube lens TL, whose back focal plane corresponds to the input plane of the add-on 4G optics module. Here, the L-state and R-state circularly polarized images created in the sample and reference paths overlap, and a polarization-directed geometric-phase grating (GPG), with a spatial period Λ = 9 μm (corresponding to a 2π rotation of the anisotropy axis), is positioned.
The polarization state of the object and reference beams is changed from L-state to R-state circular polarization, and vice versa, by passing through the GPG, whereas the geometric phase changes as ±2ϕ, where ϕ is the periodic spatial change of the angular orientation of the anisotropy axis (see Figure 9c). This geometric-phase modulation tilts the object and reference waves, with orthogonal circular polarizations, towards the +1st and −1st diffraction orders, with a mutual angle of 8° for the central wavelength. The polarizer P2 and the lens L2 provide the polarization projection and the Fourier transform, respectively; then, the off-axis hologram is recorded on the CCD. By adding an optical path difference compensator at the back focal plane of the lens L1, the optical path lengths of the object and reference beams can be matched, allowing the successful use of the MMO in biological experiments with broadband light. The Q4GOM has been tested for the quantitative imaging of diverse cell classes: human cheek cells, blood smears and spontaneously transformed rat embryonic fibroblast cells. As an example, the images obtained for human cheek cells are reported in Figure 9d. The results are very impressive, since they demonstrate an accuracy well below 5 nm, opening new research directions in the quantitative retardance imaging of anisotropic biological samples [76].
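As a quick consistency check of the geometry described above, the deflection of the ±1st orders of a polarization grating follows the ordinary grating equation. The sketch below is illustrative: the central wavelength is an assumed value (it is not stated in the text), chosen to show that a Λ = 9 μm period is indeed compatible with a mutual angle of roughly 8°.

```python
import math

# First-order diffraction of a geometric-phase grating (GPG) with the
# spatial period quoted in the text. The central wavelength is an
# assumption made for this illustration only.
wavelength_um = 0.65   # assumed central wavelength (um)
period_um = 9.0        # GPG spatial period from the text (um)

# Grating equation for the first order: sin(theta) = lambda / Lambda
theta = math.degrees(math.asin(wavelength_um / period_um))
mutual_angle = 2 * theta  # angle between the +1st and -1st orders

print(f"single-order deflection: {theta:.2f} deg")
print(f"mutual angle:            {mutual_angle:.2f} deg")
```

With these assumed numbers the mutual angle comes out close to the 8° quoted in the text.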

**Figure 9.** Illustration of quantitative fourth-generation optics microscopy (Q4GOM). (**a**) Experimental setup using the 4G optics module connected to a microscope with a polarization-adapted interference objective. P1: input polarizer; IL1, IL2: illumination lenses; MMO: Mirau microscope objective; BS1: pellicle beam splitter; QWP1, QWP2, QWP3: quarter-wave plates; M: reference mirror; BS2: beam splitter cube; TL: tube lens; GPG: geometric-phase grating; L1: first Fourier lens; P2: analyzer; L2: second Fourier lens; CCD: charge-coupled device. (**b**) Polarization-adapted Mirau microscope objective (MMO) used for imaging isotropic samples. (**c**) Polarization-sensitive transformation of light by the geometric-phase grating. (**d**) Quantitative phase retardance imaging of human cheek cells. At the top, a comparison of the quantitative phase image (left) and the bright-field image (right) of the marked area (adapted from Bouchal et al. [76]).

#### **4. Conclusions**

PS-DHI is a flexible, useful development of DHI; indeed, only a few changes to the standard DH setup are required to obtain polarization-based imaging, although innovative solutions have also been developed. Single-shot imaging and high processing speed significantly improve the measurement process, making this approach appropriate for real-time multiple analyses. Moreover, the full SoP and phase distribution of an arbitrary light field can be easily and quickly measured by PS-DHI based on the geometric phase.

In this context, this review presents a brief introduction to the basic principles underlying PS-DHI and an overview of some enhancements in its technological development. To the best of our knowledge, there are no other reviews on this topic; therefore, it is our belief that this work can help researchers working in this field. Even if PS-DHI is a fairly established research line (the first work proving its feasibility dates back to 1999 [17], and in the past 20 years many works have been published introducing improvements to the technique), only a few applications are currently presented in the literature. Among these, the most promising are in the fields of microelectronics, photonics and biomedical imaging. Since it has been demonstrated with other, more complex techniques that the birefringence and, in general, the SoP modification induced by biological and electronic samples can indicate their status (e.g., the health state of some cells [54–56,69,70] or the stress–strain induced in some materials [15,16]), and considering the achievements of PS-DHI in microelectronics, photonics, and biomedical imaging in the past few years, new technological developments, such as the use of quantum holography, a recent and fascinating line of research, and new potential applications are expected in the coming years.

**Author Contributions:** Conceptualization, M.A.F. and G.C.; investigation, M.A.F. and G.C.; data curation, M.A.F. and G.C.; writing—original draft preparation, M.A.F. and G.C.; writing—review and editing, M.A.F. and G.C.; supervision, M.A.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **Appendix A**

#### *Polarization of Light*

The polarization of light defines the geometrical orientation of the oscillations of electromagnetic waves. In a transverse wave, the oscillation is perpendicular to the direction of propagation of the beam. When the field vector components along the x and y directions generate a linear trajectory over time, the polarization is called linear, whereas when the tip of the field vector describes a circle or an ellipse in any fixed plane intersecting, and normal to, the direction of propagation, the polarization is classified as circular or elliptical, respectively. The rotation can occur in two possible directions: right circular polarization if the fields rotate in a right-hand sense with respect to the propagation direction, or left circular polarization if they rotate in a left-hand sense. The polarization of light can be described with the following two different formalisms [77]:
- the Jones formalism, which describes fully polarized light using two-component complex vectors (Jones vectors) and 2 × 2 matrices;
- the Stokes formalism, which describes polarized, partially polarized and unpolarized light using four-component real vectors and 4 × 4 Mueller matrices.
Basically, in the Jones formalism, considering propagation along the z axis, the electric field can be written as *E* = *Ex* + *Ey*, where

$$
\begin{pmatrix} E_x(z,t) \\ E_y(z,t) \end{pmatrix} = \begin{pmatrix} A_x \cos(\omega t - kz + \delta_x) \\ A_y \cos(\omega t - kz + \delta_y) \end{pmatrix} = \begin{pmatrix} A_x e^{i\delta_x} \\ A_y e^{i\delta_y} \end{pmatrix} e^{i(\omega t - kz)}\tag{A1}
$$

Here, *i* is the imaginary unit. The components can be written as a column vector, called the Jones vector:

$$J = \begin{pmatrix} A_x e^{i\delta_x} \\ A_y e^{i\delta_y} \end{pmatrix} \tag{A2}$$

The state of polarization of an optical wave can be expressed in terms of the amplitudes (*Ax*, *Ay*) and the phases (δ*x*, δ*y*) of the x and y components of the electric field vector. Hence, the polarization state of a light beam is completely described by the complex amplitudes in Equation (A2). When a polarized wave with field vector *E* is incident on a polarization-changing object, the emerging wave has another polarization state *E*1, given by

$$
\begin{pmatrix} E_{1x} \\ E_{1y} \end{pmatrix} = \begin{pmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{pmatrix} \begin{pmatrix} E_x \\ E_y \end{pmatrix} \tag{A3}
$$

where the 2 × 2 transformation matrix is called the Jones matrix. If the optical wave travels through different optical components, the resulting Jones vector can be evaluated by applying a cascade of Jones matrices to the input vector, $J_N J_{N-1} \cdots J_2 J_1 E$, where $J_i$ represents the polarization properties of the *i*-th element [78].
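The Jones cascade described above can be sketched in a few lines of code; the element matrices below are standard textbook forms, not code from Ref. [78].

```python
import numpy as np

def linear_polarizer(theta):
    """Jones matrix of an ideal linear polarizer at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def quarter_wave_plate(theta=0.0):
    """Jones matrix of a QWP with its fast axis at angle theta."""
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta), np.cos(theta)]])
    d = np.diag([1.0, 1j])  # quarter-wave retardance between the axes
    return r @ d @ r.T

# Horizontal linear input through a QWP at 45 deg -> circular polarization:
# the cascade J2 J1 E is just repeated matrix multiplication.
E_in = np.array([1.0, 0.0])
E_out = quarter_wave_plate(np.pi / 4) @ E_in
# Equal component magnitudes with a 90 deg phase difference -> circular
print(np.abs(E_out), np.angle(E_out[1]) - np.angle(E_out[0]))
```

Adding further elements is a matter of left-multiplying their Jones matrices onto the running product, exactly as in the cascade expression above.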

In the Stokes formalism, the polarization of light is described by four parameters related to the intensity and to the polarization ellipse parameters, as described in Figure A1 and in the following equations:

$$\begin{array}{lcl} S_0 & = & I \\ S_1 & = & Ip\cos 2\psi \cos 2\chi \\ S_2 & = & Ip\sin 2\psi \cos 2\chi \\ S_3 & = & Ip\sin 2\chi \end{array} \tag{A4}$$

where *Ip*, 2ψ and 2χ are the spherical coordinates of the polarization state in the three-dimensional space of the *S*1, *S*2 and *S*3 parameters, *I* is the total intensity of the beam, and *p* is the degree of polarization, given by $p = \sqrt{S_1^2 + S_2^2 + S_3^2}/S_0$ and constrained by 0 ≤ *p* ≤ 1. Generally, the normalized Stokes vector, obtained by normalizing to the total intensity *S*0, is used, and the three significant Stokes parameters are plotted on a spherical region. The parameter *S*1 describes the dominance of linear horizontal polarized (LHP) light over linear vertical polarized (LVP) light; *S*2 describes the preponderance of linear +45° polarized (L+45P) light over linear −45° polarized (L−45P) light; and *S*3 describes the dominance of right circular polarized (RCP) light over left circular polarized (LCP) light [79]. For pure polarization states, the normalized vector lies on the unit-radius Poincaré sphere, while for partially polarized states it lies inside the sphere, at a distance *p* from the origin.
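A small worked example of the degree-of-polarization formula (a sketch with illustrative Stokes vectors):

```python
import numpy as np

def degree_of_polarization(S):
    """p = sqrt(S1^2 + S2^2 + S3^2) / S0 for a Stokes vector S."""
    return np.sqrt(S[1] ** 2 + S[2] ** 2 + S[3] ** 2) / S[0]

print(degree_of_polarization(np.array([1.0, 1.0, 0.0, 0.0])))  # LHP light: p = 1
print(degree_of_polarization(np.array([1.0, 0.0, 0.0, 0.0])))  # unpolarized: p = 0
print(degree_of_polarization(np.array([1.0, 0.5, 0.0, 0.0])))  # partially polarized: p = 0.5
```

The three cases land on the sphere surface, at its center, and halfway in between, respectively.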

**Figure A1.** (**a**) Polarization ellipse, showing the relationship to the Poincaré sphere parameters ψ and χ. In particular, the orientation angle ψ is the angle between the major axis of the ellipse and the *x*-axis, while the ellipticity ε = a/b is the ratio of the ellipse's major axis to its minor axis (also known as the axial ratio). The ellipticity angle is an alternative parameterization: χ = arctan(b/a) = arctan(1/ε) [15,79]. (**b**) The Poincaré sphere is the parameterization of the last three Stokes parameters in spherical coordinates.

The orientation and ellipticity angles, ψ and χ, associated with the polarization ellipse can be related to the Stokes parameters associated with the Poincaré sphere as follows [80]:

$$\begin{array}{ll} \psi = \frac{1}{2} \tan^{-1} \left( \frac{S_2}{S_1} \right), & 0 \le \psi \le \pi \\ \chi = \frac{1}{2} \sin^{-1} \left( \frac{S_3}{S_0} \right), & -\frac{\pi}{4} \le \chi \le \frac{\pi}{4} \end{array} \tag{A5}$$
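Equation (A5) is straightforward to evaluate numerically; the sketch below uses `arctan2` so that the orientation angle lands in the correct quadrant.

```python
import numpy as np

def ellipse_angles(S):
    """Orientation psi and ellipticity angle chi from a Stokes vector S,
    following Equation (A5); arctan2 keeps the correct quadrant for psi."""
    psi = 0.5 * np.arctan2(S[2], S[1])
    chi = 0.5 * np.arcsin(S[3] / S[0])
    return psi, chi

# Linear +45 deg light, S = (1, 0, 1, 0): psi = 45 deg, chi = 0
psi, chi = ellipse_angles(np.array([1.0, 0.0, 1.0, 0.0]))
print(np.degrees(psi), np.degrees(chi))

# Right circular light, S = (1, 0, 0, 1): chi = +45 deg (pole of the sphere)
psi_c, chi_c = ellipse_angles(np.array([1.0, 0.0, 0.0, 1.0]))
print(np.degrees(chi_c))
```

Circular states sit at the poles of the Poincaré sphere (χ = ±45°), linear states on its equator (χ = 0), in agreement with Figure A1b.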

Following the Mueller formalism, the Stokes vector at the output of a given optical sample can be modelled as *S*′ = *M*sample *S*, where *S* and *S*′ are the input and output Stokes vectors, respectively, and *M*sample is the 4 × 4 Mueller matrix of the sample,

$$
\begin{pmatrix} S_0' \\ S_1' \\ S_2' \\ S_3' \end{pmatrix} = \begin{pmatrix} M_{11} & M_{12} & M_{13} & M_{14} \\ M_{21} & M_{22} & M_{23} & M_{24} \\ M_{31} & M_{32} & M_{33} & M_{34} \\ M_{41} & M_{42} & M_{43} & M_{44} \end{pmatrix} \begin{pmatrix} S_0 \\ S_1 \\ S_2 \\ S_3 \end{pmatrix} \tag{A6}
$$

Similarly to the Jones calculus, when a polarized light wave passes through several optical elements, the polarization of the outgoing beam can be evaluated by knowing the Stokes vector of the input wave and applying the Mueller calculus, which requires a Mueller matrix for each element crossed; the resulting vector contains the Stokes parameters of the light leaving the system [80].
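A minimal sketch of the Mueller cascade (the polarizer matrix used here is the standard textbook form, not code from Ref. [80]):

```python
import numpy as np

def mueller_polarizer(theta):
    """Mueller matrix of an ideal linear polarizer at angle theta."""
    c2, s2 = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1.0,      c2,      s2, 0.0],
        [c2,  c2 * c2, c2 * s2, 0.0],
        [s2,  c2 * s2, s2 * s2, 0.0],
        [0.0,     0.0,     0.0, 0.0],
    ])

# Unpolarized light through a horizontal polarizer, then one at 45 deg.
# Like Jones matrices, Mueller matrices compose right-to-left.
S_in = np.array([1.0, 0.0, 0.0, 0.0])
S_out = mueller_polarizer(np.pi / 4) @ mueller_polarizer(0.0) @ S_in
print(S_out)  # intensity 0.25, fully +45-deg polarized
```

Unlike the Jones calculus, this handles the unpolarized input directly, which is exactly the advantage of the Stokes description noted above.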

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### *Article* **Analysis of Pulses Bandwidth and Spectral Resolution in Femtosecond Stimulated Raman Scattering Microscopy**

**Luigi Sirleto 1, Rajeev Ranjan 1,2 and Maria Antonietta Ferrara 1,\***


**\*** Correspondence: antonella.ferrara@na.isasi.cnr.it

**Featured Application: Stimulated Raman microscopy based on two femtosecond pulsed lasers, with a spectral resolution of about 56 cm−1, is demonstrated to be sufficient to distinguish the protein and lipid bands in the C-H region.**

**Abstract:** In the last decade, stimulated Raman scattering (SRS) imaging has been demonstrated to be a powerful method for the label-free, non-invasive mapping of individual species distributions in a multicomponent system. This is due to the chemical selectivity of SRS techniques and to the linear dependence of SRS signals on the individual species concentrations. However, even though significant efforts have been made to improve spectroscopic coherent Raman imaging technology, how best to resolve overlapping Raman bands in biological samples is still an open question. In this framework, spectral resolution, i.e., the ability to distinguish closely lying resonances, is the crucial point. Therefore, in this paper, the interplay among the pump and Stokes bandwidths, the degree of chirp-matching and the spectral resolution of femtosecond stimulated Raman scattering microscopy is experimentally investigated, and the separation of the protein and lipid bands in the C-H region, which are of great interest in biochemical studies, is, in principle, demonstrated.

**Keywords:** stimulated Raman microscopy; pulsed source; laser pulse bandwidths; laser chirping; spectral resolution

#### **1. Introduction**

Over the past ten years, stimulated Raman scattering (SRS) microscopy has been investigated in nanophotonics [1–4] as well as in biophotonics as an analytical, label-free, noninvasive technique with unique cellular and tissue imaging capabilities [5–8]. As almost all biomolecules contain carbon and hydrogen, the CH-stretching region (2800–3100 cm−1) of the Raman spectra of biomolecules is the most used in SRS microscopy. The two Raman bands typically investigated are CH2, near 2845 cm−1, and CH3, near 2930 cm−1, corresponding to lipids and proteins, respectively. Due to their broad spectral widths (about 100 cm−1) and a peak separation of only 85 cm−1, the CH2 and CH3 bands are partially overlapped. Moreover, in the C-H region, the SRS signal level is high because the density of C-H bonds is high, while the SRS molecular specificity is assumed to be low.
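A toy calculation makes the partial overlap of the CH2 and CH3 bands concrete; it is an illustrative sketch that assumes Gaussian band shapes, which the text does not specify.

```python
import numpy as np

# Model the CH2 (~2845 cm^-1, lipids) and CH3 (~2930 cm^-1, proteins)
# bands as Gaussians of ~100 cm^-1 full width; peak positions and widths
# are taken from the text, the Gaussian shape is an assumption.
x = np.linspace(2700.0, 3100.0, 2001)          # Raman shift axis (cm^-1)
fwhm = 100.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

ch2 = np.exp(-0.5 * ((x - 2845.0) / sigma) ** 2)   # lipid band
ch3 = np.exp(-0.5 * ((x - 2930.0) / sigma) ** 2)   # protein band

# Normalized inner product of the two band profiles (1 = identical bands)
overlap = float((ch2 * ch3).sum()
                / np.sqrt((ch2 ** 2).sum() * (ch3 ** 2).sum()))
print(f"band overlap: {overlap:.2f}")  # ~0.37: substantial overlap
```

An overlap of roughly 0.37 illustrates why, with broadband femtosecond excitation, the two bands cannot be separated without some form of spectral narrowing.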

Typically, SRS microscopy is implemented by using two Fourier-transform-limited (FTL) tunable picosecond (ps) laser sources with a high spectral resolution (≈10 cm−1), which is helpful in the region of interest (i.e., the fingerprint region, 800–1800 cm−1), where Raman peaks are narrow, closely spaced, and can be crowded [8]. When ps laser pulses are used to implement an SRS microscope, equally fruitful imaging in the carbon–hydrogen (C-H) stretching region is nevertheless achieved. The drawback is that ps pulses have a low peak intensity, thus requiring high laser power for imaging [5–8]. In the last decade, an improvement of about one order of magnitude in the SRS signal has been demonstrated when ps pulses are replaced with femtosecond (fs) pulses [9]. This improvement is due to the higher peak intensity; thus, a higher signal-to-noise ratio (SNR) can be obtained when temporally shorter pulses are used instead of narrowband picosecond pulses with unchanged optical power. However, when ultra-fast sources are used, a low spectral selectivity is obtained and multi-band excitation can occur, preventing, in principle, the separation of some bands of particular interest in biology, such as those of lipids and proteins, as discussed previously. To solve this issue, a number of methods for SRS multicolor imaging based on broadband femtosecond pulses have been developed. Among them, frequency tuning [10], multiplexing [11,12] and spectral focusing [13–16] implementations have been studied and reported in the literature.

**Citation:** Sirleto, L.; Ranjan, R.; Ferrara, M.A. Analysis of Pulses Bandwidth and Spectral Resolution in Femtosecond Stimulated Raman Scattering Microscopy. *Appl. Sci.* **2021**, *11*, 3903. https://doi.org/10.3390/app11093903

Academic Editors: Bernhard Wilhelm Roth and Andrés Márquez

Received: 15 March 2021; Accepted: 23 April 2021; Published: 26 April 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

It is well known that a light pulse is considered transform-limited when its instantaneous angular frequency is constant and equal to the central angular frequency, ω(t) = ω0. On the other hand, the chirp of an optical pulse is generally understood as the time dependence of its instantaneous frequency; thus, a chirped pulse with carrier frequency ω0 shows, at time t, an instantaneous frequency ω(t) that depends on the linear chirp parameter β through the equation ω(t) = ω0 + 2βt [17]. In detail, a down-chirp (up-chirp) means that the instantaneous frequency decreases (increases) with time. A pulse can acquire a chirp, for example, through propagation in a transparent medium, due to the effects of chromatic dispersion and nonlinearities. Indeed, owing to its wide spectral width and to group velocity dispersion, an optical pulse propagating in a transparent medium undergoes a phase distortion that increases its duration and redistributes the laser frequencies in time, as reported in Figure 1. In particular, the time–bandwidth product Δω0·τ0 = 4 ln 2 ≈ 2.77 corresponds to the area of the ellipse on the left in Figure 1 and characterizes an FTL laser pulse. As a result of the chirp, laser pulses undergo a temporal stretch and the final pulse duration is τ = *F*τ0, where τ0 is the FTL pulse duration and *F* is the stretching factor; at the same time, the instantaneous spectral bandwidth becomes narrower than the FTL spectral bandwidth by a factor of 1/*F* [18]. Since the relation Δω·τ = 2.77 also applies to the chirped pulse width τ and to the instantaneous bandwidth Δω of the pulse, the duration broadening leads to a decrease in the instantaneous bandwidth by the stretching factor, whereas the whole bandwidth Δω0 is left unchanged [19], as depicted in Figure 1.
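The bookkeeping of the stretching factor can be sketched numerically; the 140 fs duration and the stretching factor below are illustrative assumptions, not values from the experiment.

```python
import math

# Time-bandwidth relations for a linearly chirped Gaussian pulse,
# following the expressions quoted in the text.
TBP = 4 * math.log(2)       # = 4 ln 2 ~ 2.77 for a Gaussian FTL pulse
tau0 = 140e-15              # assumed FTL pulse duration (140 fs)
domega0 = TBP / tau0        # whole FTL spectral bandwidth (rad/s)

F = 10.0                    # assumed stretching factor from the chirp
tau = F * tau0              # stretched pulse duration
domega_inst = domega0 / F   # instantaneous bandwidth narrows by 1/F

# The whole bandwidth is unchanged; only the instantaneous one shrinks.
print(f"time-bandwidth product ~ {TBP:.2f}")
print(f"stretched duration     = {tau * 1e15:.0f} fs")
print(f"instantaneous/FTL BW   = {domega_inst / domega0:.2f}")
```

The product of the stretched duration and the instantaneous bandwidth recovers the same constant, which is the statement Δω·τ = 2.77 above.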

**Figure 1.** Time-bandwidth distribution for (**a**) Fourier-transform limited laser pulse and (**b**) the same laser pulse that is linearly chirped.

In order to enhance the spectral resolution of SRS microscopy based on fs laser pulses, one option is to impose a quadratic spectral phase variation. By equally chirping the pump and Stokes beams, with an energy spacing corresponding to the Raman line, it is possible to generate a constant instantaneous frequency difference (IFD, Ω = ωp − ωs) that spectrally focuses the excitation energy onto a single resonance. The bandwidth δΩ ultimately determines the SRS spectral resolution and, in the limiting case, can emulate a ps SRS system. This method is known as spectral focusing (SF) [13–16]. Nevertheless, the great disadvantage of this approach is the large number of parameters that must be taken into account in the selection and alignment of the optics to obtain the chirp-matching condition. Moreover, since the operating conditions can be altered by fluctuations in the pump and Stokes wavelengths, together with the dispersion in the microscope, perfect chirp-matching is difficult to maintain. For these reasons, the resulting spectral resolution of SF-SRS setups is often worse than theoretically predicted [13–16].

The comparison among FTL, equally chirped and differently chirped laser sources is shown in Figure 2. In spectral focusing, when the pump and Stokes pulses are "chirp-matched", the bandwidth of the IFD is lower than the total bandwidths of the transform-limited pulses, leading to an improvement of the spectral resolution, as reported in Figure 2b. On the other hand, when the chirp-matching condition is not met, the bandwidth of the IFD is broader and, as a consequence, the spectral resolution is poorer than in the chirp-matched case (see Figure 2c). Interestingly, the spectral resolution for differently chirped lasers is still better than that for FTL laser sources [19].
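The chirp-matching condition can be illustrated with the linear-chirp relation ω(t) = ω0 + 2βt quoted above; all numerical values in the sketch are illustrative assumptions.

```python
import numpy as np

# Instantaneous frequency difference (IFD) between linearly chirped pump
# and Stokes pulses: omega(t) = omega_0 + 2*beta*t.
t = np.linspace(-1e-12, 1e-12, 5)      # time axis (s), illustrative
w_p0, w_s0 = 2.42e15, 1.81e15          # assumed pump/Stokes carriers (rad/s)

def ifd(beta_p, beta_s):
    """IFD over time for given pump and Stokes chirp parameters."""
    return (w_p0 + 2 * beta_p * t) - (w_s0 + 2 * beta_s * t)

matched = ifd(5e26, 5e26)      # equal chirps -> IFD constant in time
mismatched = ifd(5e26, 3e26)   # unequal chirps -> IFD sweeps in time

print("matched IFD spread   :", np.ptp(matched))
print("mismatched IFD spread:", np.ptp(mismatched))
```

With equal chirps the time-dependent terms cancel, so a single Raman resonance is addressed throughout the pulse; any chirp mismatch makes the IFD sweep, broadening the effective excitation exactly as described for Figure 2c.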

**Figure 2.** Coherent Raman excitation with (**a**) FTL laser pulses, (**b**) laser pulses with the same chirp, (**c**) laser pulses with different chirps. Ω is the vibration frequency given by Ω = ω<sub>p</sub> − ω<sub>s</sub>.

For microscopy, both image contrast and spectral tuning are important to assess image quality. However, in coherent Raman scattering there is a tradeoff between the best spectral resolution, which is reached with ps sources, and the best ratio of image contrast to signal intensity, which is obtained when the spectral resolution and the width of the Raman lines under observation are almost the same [14]. This condition is satisfied both in the case of excitation with ps pulses, since they match the linewidths in the fingerprint region (5–20 cm<sup>−1</sup>), and in the case of broader-bandwidth femtosecond (fs) pulses, which are considered the ideal excitation for CH stretching vibrations.

We note that the SF approach usually takes advantage of static dispersive elements, such as glass rods, placed into the optical path to stretch both the pump and the Stokes pulses in time and produce a constant IFD. However, the optical elements embedded in an SRS microscope setup can also change the pulse width. In this paper, taking advantage of the small chirp introduced by simply propagating the beams through the dispersive materials already present in the SRS microscope setup, we examine the resulting spectral resolution through correlation measurements and demonstrate that it is sufficient to distinguish protein and lipid bands in the C-H region.

#### **2. Experimental Setup and Methods**

Our optical system, which is not commercially available and is schematically shown in Figure 3, is obtained by integrating a femtosecond-SRS spectroscopy setup [20–22] with a C2 confocal Nikon microscope, which consists of an inverted Nikon Ti-eclipse microscope and a scan head. Experimental details of our setup have already been reported in our previous papers [22–25]. Two femtosecond laser sources are involved in this setup: a Ti:Sapphire laser (Ti:Sa—Chameleon Ultra II—pulse duration = 140 fs, repetition rate = 80 MHz, emission wavelength range = 680–1080 nm) and a synchronized femtosecond optical parametric oscillator (OPO—Chameleon Compact OPO—pulse duration = 200 fs, repetition rate = 80 MHz, emission wavelength range = 1000–1600 nm).

**Figure 3.** Experimental setup for stimulated Raman scattering imaging. Ti:Sa: Titanium-Sapphire pulsed laser source; OPO: optical parametric oscillator; M: mirror; EOM: electro-optical modulator; GM: galvo-mirror; DM: dichroic mirror; OBJ: objective lens; PD: photo-detector; FFFC: flip-flop fiber coupler; S: multimode optical fiber; OSA: optical spectrum analyzer.

Typically, the temporal characterization of a signal is performed by measuring the correlation between the signal and a replica of itself. In particular, optical autocorrelation of the field intensity may be used to measure the second-order degree of coherence and to assess the duration of short pulses. With this aim, an additional flip-flop mirror was mounted between the mirror M4 and the input of the scan head in our optical architecture to deflect the laser beams directly toward an autocorrelator (pulseCheck 50—A.P.E., Berlin) (see Figure 3); therefore, auto- and cross-correlations of the pulsed beams were measured in the optical path just before the laser beam reaches the microscope. In an autocorrelator, the input pulsed beam is divided into the two arms of a Michelson interferometer and a delay can be induced in the pulses of one arm with respect to the other. Both pulses are then overlapped in a non-linear crystal and the generated signal is detected. The pulse duration can be evaluated from the measurement of the time delay and the intensity of the generated signal [26]. The detectable pulse width range is fixed by the delay range, whereas the measurable wavelength range is determined by the detector and the non-linear material. Here, the nonlinear process is two-photon absorption (TPA), measured as a function of the time delay to give the beam autocorrelation function. Autocorrelators based on TPA have some important advantages: (i) a higher sensitivity can be obtained with respect to second harmonic generation (SHG), since only fields at the original frequency ω are involved, owing to the resonant second-order nature of the TPA transition; (ii) extremely short pulses can be characterized, since TPA can operate over a wide wavelength range not restricted by a narrow phase-matching bandwidth; (iii) in TPA, non-linear signal multiplication and detection are combined into one process, leading to a simplification and a higher efficiency with respect to the two-step process of optical non-linearity followed by linear detection.

The pulseCheck can operate in two measurement modes: collinear and non-collinear. The collinear mode, also known as interferometric or fringe-resolved mode, provides additional qualitative information on the chirp and central wavelength of the pulse, while the non-collinear mode allows high dynamic range, background-free measurements [27].

The autocorrelator was used with a pulse width measurement range of 10 fs–12 ps and was connected to the PC through USB using APE's Standardized Software Interface, which allows either remote control or integration into automated setups. The deflected beam is tuned until the intensity is stabilized and maximized; then, the pulse is acquired and the data analysis is completed by Gaussian fitting with the curve fitting tool *cftool* of MATLAB 2020.

To evaluate the spectral resolution, an exact estimation of the pulse duration is required. Normally, the FWHM of the unknown pulse, τ<sub>p</sub>, is proportional to the FWHM of the measured intensity autocorrelation function, τ<sub>ac</sub>:

$$
\tau\_{ac} = k \cdot \tau\_p \tag{1}
$$

where *k* is the proportionality factor, also known as the deconvolution factor. *k* differs significantly for different pulse shapes; therefore, evaluating the pulse width from the intensity autocorrelation requires some prior knowledge of the pulse shape. In general, the deconvolution factor can be calculated for analytical pulse shapes or computed numerically for complicated pulses; for some common pulse shapes, the deconvolution factor is known (*k* = 1.414 for Gaussian, *k* = 1.543 for sech, *k* = 1 for square pulses) [28].
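The deconvolution in Equation (1) can be coded as a small helper (the function name and dictionary below are ours, written for illustration):

```python
# Tabulated deconvolution factors k from Eq. (1) for common pulse shapes.
DECONVOLUTION_K = {"gaussian": 1.414, "sech": 1.543, "square": 1.0}

def pulse_fwhm(ac_fwhm_fs, shape="gaussian"):
    """Return the pulse FWHM tau_p (fs) given the autocorrelation FWHM tau_ac (fs)."""
    return ac_fwhm_fs / DECONVOLUTION_K[shape]

# E.g., a 341 fs Gaussian autocorrelation trace corresponds to a ~241 fs pulse.
print(round(pulse_fwhm(341)))   # -> 241
```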

It is well known that the Fourier transformation relates β, which defines the linear slope of the instantaneous frequency, to the group delay dispersion (GDD) applied to the laser pulse, as reported in the following [19]:

$$\beta = \frac{2\,GDD}{\tau\_0^4 + 4\,GDD^2} \approx \frac{1}{2\,GDD} \text{ for } \tau \gg \tau\_0 \tag{2}$$

The GDD leads to a stretch of the pulse width, which passes from τ<sub>0</sub> in the case of the FTL pulse to the chirped width τ:

$$\tau = \tau\_0 \sqrt{1 + \left(\frac{4 \ln 2 \cdot GDD}{\tau\_0^2}\right)^2} \approx 2.77 \frac{|GDD|}{\tau\_0} \text{ for } \tau \gg \tau\_0 \tag{3}$$

From Equations (2) and (3), we find that τ·τ<sub>0</sub> ≈ 4 ln 2·|*GDD*| ≈ 2 ln 2/|β|; this product is considered a direct measure of the chirp parameter β for τ ≫ τ<sub>0</sub> [19]. We note that two chirped pulses have the same slope only if the resulting products τ·τ<sub>0</sub> are equal.
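These limiting relations can be verified numerically; the GDD value below is hypothetical and deliberately chosen large enough that the condition τ ≫ τ₀ actually holds:

```python
import math

tau0 = 140.0      # FTL duration in fs
GDD = 1.0e5       # hypothetical group delay dispersion in fs^2 (strong chirp)

beta = 2 * GDD / (tau0**4 + 4 * GDD**2)                              # Eq. (2)
tau = tau0 * math.sqrt(1 + (4 * math.log(2) * GDD / tau0**2) ** 2)   # Eq. (3)

product = tau * tau0
print(product / (4 * math.log(2) * GDD))    # ~1: product ~ 4 ln 2 |GDD|
print(product / (2 * math.log(2) / beta))   # ~1: product ~ 2 ln 2 / |beta|
```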

Information obtained from autocorrelation measurements allows us to monitor the pulse duration and chirp of the laser beam, which are very important parameters for optimizing the non-linear interaction in microscopy. Moreover, considering that SRS is a two-pulse technique, its spectral resolution is not defined by the spectrum of the individual exciting pulses but by the spectrum of their temporal interference. Consequently, cross-correlation characterization is equally important for providing information about the entire system; in particular, the FWHM of the pump and probe beams' cross-correlation allows us to evaluate the experimental spectral bandwidth [29]. Typically, SF is obtained by equally chirping the pump and the Stokes pulses using glass elements of known group-velocity dispersion, without significant intensity losses [30]. As a general rule, the spectral resolution is restricted both by the level to which the pulses are chirped and by the similarity of these chirps. A better spectral resolution can be obtained when pump and Stokes pulses with larger bandwidths are chirp-matched. In the case of transform-limited laser pulses, the spectral resolution can be calculated by the formula [19]:

$$
\Delta\widetilde{\nu} = \frac{2\ln 2}{\pi c} \sqrt{2\left(\tau\_p^{-2} + \tau\_\text{S}^{-2}\right)} = 20.8 \text{ ps} \cdot \text{cm}^{-1} \sqrt{\tau\_p^{-2} + \tau\_\text{S}^{-2}}\tag{4}
$$

and in the case of our fs laser sources its evaluated value is 181 cm<sup>−1</sup>. In order to carry out the cross-correlation between Ti:Sa and OPO, both sources are focused onto the TPA detector; then, by inserting an optical delay line (Newport MOD MILS200CC) between the Ti:Sa and the microscope, we introduce an optical delay in the Ti:Sa beam, and at each step of the delay line the signal is acquired.
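The 181 cm⁻¹ value follows directly from Equation (4), using the nominal FTL durations of the two sources (140 fs and 200 fs):

```python
import math

c = 0.0299792458          # speed of light in cm/ps
tau_p = 0.140             # Ti:Sa (pump) FTL duration, ps
tau_s = 0.200             # OPO (Stokes) FTL duration, ps

# Eq. (4): spectral resolution of transform-limited pulses, in cm^-1.
dnu = (2 * math.log(2) / (math.pi * c)) * math.sqrt(2 * (tau_p**-2 + tau_s**-2))
print(round(dnu))         # -> 181
```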

Finally, to achieve a complete characterization of the pulsed sources used in our SRS microscope, we measured the laser spectra by adding a flip-flop fiber coupler (FFFC in Figure 3) to the optical setup without disturbing it; the deflected beam is coupled into an optical spectrum analyzer (OSA—Ando AQ6317C) through a multimode optical fiber S mounted after the flip-flop mirror. Figure 3 reports the complete setup used for the characterization of the pulsed laser sources, integrated with the described system.

#### **3. Results and Discussion**

Autocorrelation characterizations of the pulsed laser beams were carried out using the aforementioned autocorrelator. The Ti:Sa and OPO emission wavelengths were fixed at 811 nm and 1074 nm, respectively. The collected autocorrelation functions of the Ti:Sa and OPO are reported in Figure 4a,b, respectively. As can be seen in these figures, the pulses emitted by the lasers show a Gaussian-like distribution, as expected. Therefore, to evaluate the pulse width, both autocorrelation traces were Gaussian-fitted, as displayed in Figure 4a,b, and the FWHM of the Gaussian curves was calculated to be about 341 fs for the Ti:Sa and 357 fs for the OPO. Since the pulses follow a Gaussian line shape, Equation (1) with the factor 1.414 leads to an estimated pulse duration at the input of the microscope of about τ = 241 fs for the Ti:Sa and τ = 253 fs for the OPO. Thus, a small broadening was observed for the OPO, while a larger one was found for the Ti:Sa. These broadenings are related to the dispersion arising during propagation through the several optical elements along the beam paths; in particular, the larger broadening observed for the Ti:Sa can be explained considering that a Pockels cell is employed in its optical path. Table 1 summarizes the features of the two lasers and the results of their autocorrelation measurements.

**Figure 4.** Autocorrelation trace of the pulsed laser sources (**a**) Ti:Sa and (**b**) OPO, respectively. Results of the Gaussian fit are also reported.


**Table 1.** Pulsed laser sources properties.

The second-order group velocity dispersion can be evaluated for each pulsed source by applying Equations (2) and (3). The obtained values are GDD = 9905 fs<sup>2</sup> for the Ti:Sa and GDD = 11,177 fs<sup>2</sup> for the OPO. In our case, therefore, the pulses were not equally chirped.
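These GDD values can be reproduced by inverting Equation (3) for the GDD, given the FTL duration τ₀ and the measured chirped duration τ (the helper below is our own sketch of that inversion):

```python
import math

def gdd_from_broadening(tau0_fs, tau_fs):
    """Invert Eq. (3): GDD = (tau0^2 / (4 ln 2)) * sqrt((tau/tau0)^2 - 1)."""
    return tau0_fs**2 * math.sqrt((tau_fs / tau0_fs) ** 2 - 1) / (4 * math.log(2))

print(round(gdd_from_broadening(140, 241)))   # Ti:Sa -> 9905 fs^2
print(round(gdd_from_broadening(200, 253)))   # OPO   -> 11177 fs^2
```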

To complete the single-beam characterization, the laser spectra were acquired with the optical spectrum analyzer; the results are shown in Figure 5 for the Ti:Sa tuned to 811 nm and the OPO tuned to 1074 nm. The resultant bandwidths are approximately the same for both pulsed laser sources, about 6.1 nm.

**Figure 5.** Optical spectra of the pulsed laser sources (**a**) Ti:Sa and (**b**) OPO, respectively. Results of the Gaussian fit are also reported.

The measured FWHM of the Ti:Sa–OPO cross-correlation was 371 fs; since the convolution of two Gaussian functions is another Gaussian function [31], Equation (1) can also be applied to cross-correlation measurements, leading to a pulse duration of 262 fs (see Figure 6). Thus, the obtained experimental spectral bandwidth, given by the FWHM of the cross-correlation in the frequency domain, was 56 cm<sup>−1</sup>.
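The chain from measured cross-correlation FWHM to experimental spectral bandwidth can be written compactly: Gaussian deconvolution via Equation (1), followed by the Gaussian time-bandwidth relation expressed in wavenumbers:

```python
import math

c = 0.0299792458              # speed of light in cm/ps
xc_fwhm_ps = 0.371            # measured Ti:Sa-OPO cross-correlation FWHM, ps

tau = xc_fwhm_ps / 1.414      # Gaussian deconvolution, Eq. (1)
dnu = 2 * math.log(2) / (math.pi * c * tau)   # FWHM bandwidth in cm^-1

print(round(tau * 1000))      # -> 262 (fs)
print(round(dnu))             # -> 56 (cm^-1)
```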

**Figure 6.** Cross-correlation measure and its Gaussian fit.

The retrieved spectral resolution is very useful in biological sample imaging when both lipid and protein bands need to be investigated by SRS microscopy, by simply tuning the frequency of either the pump or the Stokes beam in successive scans. The corresponding stretching signals are CH2 (2845 cm<sup>−1</sup>) for lipids and CH3 (2940 cm<sup>−1</sup>) for proteins; thus, these bands can be collected by choosing one Raman shift at a time, allowing the mapping of the lipid and protein distributions over the same field of the sample. Since our experimental spectral bandwidth was 56 cm<sup>−1</sup>, when we set the laser beams to excite the 2845 cm<sup>−1</sup> lipid band, we actually excite from 2817 to 2873 cm<sup>−1</sup> (FWHM); in the same way, when we set the laser beams to the protein band at 2940 cm<sup>−1</sup>, we effectively excite the range 2912–2968 cm<sup>−1</sup> (FWHM). Figure 7 illustrates the spectral bandwidth derived from the chirping of the pulses (56 cm<sup>−1</sup>, continuous lines) and two simulated Raman bands with peaks separated by 95 cm<sup>−1</sup> and widths of about 100 cm<sup>−1</sup> (red circles and green diamonds).
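These FWHM excitation windows are simply the two band centers ± half the 56 cm⁻¹ bandwidth, and they do not overlap at half maximum:

```python
bandwidth = 56                 # experimental spectral bandwidth, cm^-1
lipid, protein = 2845, 2940    # CH2 and CH3 band centers, cm^-1

lipid_win = (lipid - bandwidth // 2, lipid + bandwidth // 2)
protein_win = (protein - bandwidth // 2, protein + bandwidth // 2)

print(lipid_win)                        # -> (2817, 2873)
print(protein_win)                      # -> (2912, 2968)
print(lipid_win[1] < protein_win[0])    # -> True: no overlap at FWHM
```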

**Figure 7.** Raman bands for CH2 (2845 cm<sup>−1</sup>, red circles) and CH3 (2940 cm<sup>−1</sup>, green diamonds). Continuous lines are obtained considering the Ti:Sa and OPO cross-correlation reported in Figure 6 at the input of the microscope (i.e., 262 fs). Blue lines highlight the overlap area between the two excited bandwidths.

The overlap between the two excited bandwidths is highlighted with blue lines; however, the contribution to the Raman signal from this region can be neglected, considering that the intensities there are lower than the FWHM values and can therefore be considered below threshold. Thus, we can conclude that, using 56 cm<sup>−1</sup> chirped pulses, the 2845 cm<sup>−1</sup> channel is essentially related to the lipid signal and the 2940 cm<sup>−1</sup> channel is principally due to the protein content. In conclusion, considering that the retrieved experimental spectral resolution (about 56 cm<sup>−1</sup>) is narrower than the Raman bands of lipids and proteins (about 100 cm<sup>−1</sup>), and since the overlap region obtained when lipids and proteins are sequentially excited can be considered negligible, our SRS microscope is suitable for imaging molecular specificity such as lipids (CH2) and proteins (CH3) in the C-H region, as already demonstrated in our previous paper [23].

#### **4. Conclusions**

In SRS microscopy, the trade-off between the high selectivity offered by picosecond pulses and the high SRS signal obtained with femtosecond pulses is still an open question and is widely investigated. In particular, for biological applications there is interest in label-free imaging of both the lipid and the protein distribution. Unfortunately, the Raman bands of these two components are very close, so ps pulses would be required; however, the protein content can be low, leading to very weak Raman signals, for which fs pulses would be appropriate.

In this paper, an alternative method is proposed to overcome the drawbacks of spectral focusing. Its basic idea is to avoid adding optical elements to the SRS microscopy setup and to take advantage of the chirp introduced by simply propagating the beams through the dispersive materials already present in the SRS microscope. The advantage is that no additional optical elements have to be introduced in the experimental setup, yielding a simpler and cheaper system; the drawback is that the spectral resolution is fixed. However, in our setup, an experimental spectral resolution of 56 cm<sup>−1</sup> was measured by cross-correlation techniques, and molecular specificity was demonstrated for lipids and proteins in the C-H region.

Moreover, the spectral resolution was measured before the microscope, which means that a further chirping of the laser pulses is expected due to propagation inside the scan head and the microscope objective, leading to an additional improvement in spectral resolution. In summary, our method has the merit of maintaining the benefits of femtosecond pulses, i.e., an improvement in sensitivity with respect to ps pulses, while preserving an adequate spectral resolution in the C-H region of the Raman spectra of biomolecules.

**Author Contributions:** Conceptualization, methodology, investigation, validation, data analysis, writing—original draft preparation: M.A.F., R.R. and L.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Data Availability Statement:** The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

**Acknowledgments:** The authors would like to thank M. Indolfi and V. Tufano (ISASI-CNR) for their precious and constant technical assistance.

**Conflicts of Interest:** The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

#### **References**


## *Review* **Recent Developments in Instrumentation of Functional Near-Infrared Spectroscopy Systems**

#### **Murad Althobaiti and Ibraheem Al-Naib \***

Biomedical Engineering Department, College of Engineering, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia; mmalthobaiti@iau.edu.sa

**\*** Correspondence: iaalnaib@iau.edu.sa

Received: 20 August 2020; Accepted: 14 September 2020; Published: 18 September 2020

**Abstract:** In the last three decades, the development and steady improvement of various optical technologies at the near-infrared region of the electromagnetic spectrum has inspired a large number of scientists around the world to design and develop functional near-infrared spectroscopy (fNIRS) systems for various medical applications. This has been driven further by the availability of new sources and detectors that support very compact and wearable system designs. In this article, we review fNIRS systems from the instrumentation point of view, discussing the associated challenges and state-of-the-art approaches. In the beginning, the fundamentals of fNIRS systems as well as light-tissue interaction at NIR are briefly introduced. After that, we present the basics of NIR systems instrumentation. Next, the recent development of continuous-wave, frequency-domain, and time-domain fNIRS systems are discussed. Finally, we provide a summary of these three modalities and an outlook into the future of fNIRS technology.

**Keywords:** NIRS technology; spectroscopy; imaging; bioinstrumentation; near-infrared

#### **1. Introduction**

The invention of near-infrared spectroscopy (NIRS) has enabled many investigations and developments in various scientific fields, ranging from pure research laboratory experiments to robust industrial procedures for different purposes [1,2]. More interestingly, numerous biomedical studies have been carried out using NIRS systems [3–5]. Among many applications, medical diagnostics, such as functional neuroimaging, cancer diagnosis, rehabilitation, and neurology, have been a driver for numerous investigations and developments [3–6]. Starting in the 1990s, a new chapter of NIRS spawned numerous efforts to develop functional NIRS (fNIRS) systems for different applications [2,7–10]. These efforts followed naturally from the understanding that fNIRS offers capabilities that are not available with other techniques; fNIR imaging (fNIRI) and spectroscopy were just some primary examples. More specifically, cerebral blood flow (CBF) and cerebral blood volume (CBV) are indirectly connected with mental activity. Neural activity increases the cerebral metabolic rate of oxygen, which consumes glucose and oxygen and releases vasoactive neurotransmitters; these lead to vasodilation of arterioles and finally to a local increase in CBF and CBV [11]. Therefore, fNIRS is considered one of the main emerging neuroimaging techniques. It allows physicians to view activity within the human brain without the need for quite complicated invasive neurosurgery. The potential of such techniques has been reviewed in several recent papers [6,12–15].

Studying light-tissue interaction at any frequency band is a quite complicated and, at the same time, interesting task, because biological tissues are multilayered, multicomponent, and optically inhomogeneous. The interaction includes reflection, refraction, absorption, and multiple scattering of photons in the tissue, as shown in Figure 1. The fact that the absorption by water molecules is lower than that of oxyhemoglobin and deoxyhemoglobin in the wavelength range between 650 and 1000 nm enables us to easily estimate the concentrations of oxyhemoglobin and deoxyhemoglobin. Nevertheless, strong scattering of light is a characteristic feature of tissue, in which near-infrared light propagates in all directions and diffusely illuminates the tissue volume instead of following a narrow path. The absorption and scattering effects at NIR will be discussed in more detail in Section 2.

**Figure 1.** Illustration of the light signal propagation through a biological tissue after it has been partially absorbed and scattered.

In 1993, a NIRS measurement was performed by Hoshi and Tamura [16] by combining five single-channel NIRS instruments. Since then, NIRS instrumentation has been continuously developing and has established its place as a functional brain imaging modality in research use. For instance, a system with 96 sources, 64 detectors, and 3072 measurement channels was recently built [17]. Depending on the area of interest, a number of emitters and detectors are used, separated by a distance of a few to several centimeters. The acquired raw data are then processed and analyzed using a computer. The oxygen saturation of the probed tissue can be estimated by evaluating the ratio of the red-light intensity to the near-infrared light intensity re-emitted from the tissue. Therefore, the fraction of oxyhemoglobin measured, for example, at a spot on the brain reflects the local activity at that spot. Furthermore, this method uses a quite low fluence of non-ionizing light radiation. Interestingly, the light wavelengths used have a penetration depth of a few centimeters within tissues. Increasing the source–detector separation provides a better penetration depth (higher depth sensitivity profile); however, fewer photons will reach the detectors, which results in a low signal-to-noise ratio (SNR). Clearly, there is a trade-off between the light penetration depth and the SNR. A source–detector separation of 3 cm is a reasonable compromise between depth sensitivity and SNR in brain studies of adult populations [18,19], while a separation of 2 to 2.5 cm is reasonable for infant populations [20,21].

fNIRS instrumentation continues to improve, facilitated in part by the availability of compact semiconductor photodiodes at the wavelengths of interest. Nowadays, compact and wearable commercial systems are also accessible for imaging and spectroscopy. Even with such advancements, fNIRS systems continue to be a subject of practical development to make them wearable and as compact as possible. The pros and cons of fNIRS have been reviewed in a number of articles [10,22,23]; Ref. [23] specifically discusses the main features of the commercially available fNIRS systems. fNIRS technology is experimentally flexible, silent, and can be easily integrated with positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or electroencephalography (EEG). Nevertheless, fNIRS systems have two main limitations, namely a low spatial resolution of about 1 cm and access to the hemodynamic response of the outer cortex only [23].

Over the last few decades, fNIRS systems have been designed by utilizing different NIRS techniques. These techniques can be categorized into three main types: (i) continuous wave (CW), which measures light attenuation under constant tissue illumination; (ii) frequency-domain (FD), which utilizes the phase delay and attenuation of the detected light; and (iii) time-domain (TD), which measures the shape of short pulses after propagation through tissue. The signal acquired by these techniques is then post-processed using signal processing algorithms. Accordingly, various systems and techniques have been the subject of many informative articles and reviews. Table 1 presents a brief list of the review papers that focused on fNIRS systems and their applications.


**Table 1.** Most relevant review papers about CW-, FD-, and TD-fNIRS systems.

However, most of the reviews have focused on the differences among these three techniques and their applications rather than on the instrumentation of such systems. Therefore, the main goal of this paper is to remedy this gap by examining the main differences and similarities between the CW, TD, and FD techniques in building fNIRS systems from an instrumentation point of view. The manuscript is organized as follows: first, the light-tissue interaction at NIR, represented by the effects relevant to fNIRS, namely absorption and scattering, and the basics of NIR instrumentation are presented in Section 2. In Sections 3–5, we discuss recently developed CW, FD, and TD fNIRS instrumentation. Section 6 then presents a comparison across these three modalities and, finally, the paper is summarized in Section 7.

#### **2. Near-Infrared Systems Instrumentation**

As for any optical system in the visible or NIR light range, the instrumentation of a NIRS system consists of (i) an emitter device to illuminate a small area of tissue with light at two or more wavelengths, namely in the red and infrared range; (ii) a detector device to measure the back-scattered light emerging from the tissue; and (iii) a diffraction grating to differentiate and record the intensity of the different wavelengths [24]. Practically, several sources and detectors (called optodes) are required and the collective effect is measured.

In order to choose the optimal wavelengths for the sources, the effects relevant to fNIRS, such as absorption and scattering, need to be carefully considered. At NIR wavelengths, atoms or molecules absorb part of the light energy. The absorption level is determined by the molecular composition of the tissue, the wavelength of the emitted light, and the thickness of the tissue. Within the NIR window, molecules such as water and lipids are minimal absorbers compared to the iron-containing hemoglobin present within the blood. In this wavelength window, deoxyhemoglobin (Hb) and oxyhemoglobin (HbO2) absorb light strongly. Figure 2 illustrates the absorption properties of deoxyhemoglobin (Hb) and oxyhemoglobin (HbO2), as well as the so-called "diagnostic/therapeutic window", where water absorption is at its minimum. Thus, light in this optical window can penetrate deeper into tissue [34–36].

**Figure 2.** The light absorption of the deoxyhemoglobin (Hb) and oxyhemoglobin (HbO2) of biological tissue (re-drawn from the data taken from Ref. [37]). In the diagnostic window, the absorption level between Hb and HbO2 is notable, and water absorption is at its minimum.

Unlike absorption, a scattering interaction occurs when light strikes a particle and changes direction. Numerous factors, such as the wavelength, particle size, and refractive index of the tissue, contribute to the prevalence of scattering. There are two general types of scattering: elastic and inelastic. With elastic scattering, no energy is lost; the light simply changes direction. With inelastic scattering, some energy is lost from the incident light during the interaction, which means an altered frequency and wavelength. The scattering considered here is the former. Similar to absorption, the intensity of light measured by the detector at a distance in the medium is lower than the original intensity incident on the tissue, as described by the Beer–Lambert law. For the adult head, due to scattering and absorption, only about one in 10<sup>9</sup> photons entering the tissue will actually reach a detector located on the surface a few centimeters away from the source. Both absorption and scattering reduce the signal, and in actual tissues both are present simultaneously [34].
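As a hedged sketch of how such attenuation measurements are typically turned into chromophore concentrations in CW fNIRS, the standard two-wavelength modified Beer–Lambert inversion can be written as follows; the extinction coefficients, pathlength factor, and attenuation values below are illustrative placeholders, not measured data from this review:

```python
import numpy as np

# Rows: two wavelengths; columns: extinction coefficients of HbO2 and Hb
# (hypothetical values, in 1/(mM*cm)).
eps = np.array([[0.81, 1.05],
                [1.30, 0.78]])
L = 3.0          # source-detector separation, cm
DPF = 6.0        # assumed differential pathlength factor

dA = np.array([0.012, 0.009])   # example attenuation changes at the two wavelengths

# Modified Beer-Lambert law: dA = (eps * L * DPF) @ dC  ->  solve for dC.
dC = np.linalg.solve(eps * L * DPF, dA)
print(dC)        # [dHbO2, dHb] concentration changes, in mM
```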

Considering the above effects of light-tissue interaction at NIR, a variety of NIR sources currently exist. The incandescent light bulb was often used in the past, while light-emitting diodes (LEDs) are increasingly becoming the main sources in use due to their reliability, low power consumption, and long lifetimes [1,7,13]. In the NIR spectral range, LEDs are available at different emission wavelengths with output powers in the range of mW. Commonly available wavelengths are 660 nm, 670 nm, 700 nm, 850 nm, 870 nm, and 940 nm. The spectral half-width of LEDs in the 600 nm region is around 20 nm, and the widths increase for longer-wavelength materials to around 40 nm for LEDs in the 900 nm region. Nevertheless, wavelength-scanned lasers and frequency combs are used whenever high-precision spectroscopy is required [38]. Laser diodes have the advantages of small size, low energy consumption, and high-coherence output light with powers in the range of mW. In the NIR spectral range, GaAs/AlGaAs devices (850 nm) and vertical-cavity surface-emitting lasers (VCSELs) [39], which range from 750 to 980 nm, are commonly used.

For typical fNIRS optode separation distances, the intensity of light that penetrates the head and reaches the detector is very small, on the order of only a few mW down to pW [40]. Therefore, high-sensitivity detectors are crucial in this case. Hence, the choice of detectors includes light-sensitive diodes, photomultiplier tubes (PMTs), semiconductor-based pin photodiodes, and charge-coupled device (CCD) cameras [41–43], as well as avalanche photodiodes (APDs) [44]. More recently, silicon photomultipliers (SiPMs) have also been intensively utilized for fNIRS applications [41,44,45]. SiPMs feature major advantages in terms of sensitivity, gain, and acquisition speed [46]. Moreover, they provide a much higher responsivity, three or more orders of magnitude larger than that of PDs or APDs [41]. The choice of the photodetection device depends mainly on the intended application and the source emission wavelengths. For instance, the silicon-based pin photodiode is a good choice for the shorter end of the NIR spectrum (400 nm to 1000 nm). In contrast, Germanium- and InGaAs-based pin photodiodes are suitable for the long-wavelength range of the NIR: the range of the Germanium pin photodiode is from 800 nm to 1600 nm, while that of the InGaAs one is from 1100 nm to 1700 nm. The responsivity of the silicon-based pin photodiode peaks in the range between 800 nm and 900 nm. Among avalanche photodiode (APD) types, silicon APDs have a higher and wider gain (20–400) with minimal dark noise (0.1–1 nA) in comparison to Germanium APDs, which have a gain range of 50–200 with a dark noise range of 50–500 nA. This makes silicon-based pin photodiodes a common choice for many fNIRS systems [47,48].
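The wavelength ranges above can be summarized in a small lookup table (our own illustrative helper, built from the approximate ranges quoted in the text):

```python
# Approximate usable wavelength ranges (nm) for the pin photodiode types
# discussed above; values as quoted in the text.
DETECTOR_RANGES_NM = {
    "silicon pin": (400, 1000),
    "germanium pin": (800, 1600),
    "InGaAs pin": (1100, 1700),
}

def candidate_detectors(wavelength_nm):
    """Return the detector types whose range covers the given wavelength."""
    return [name for name, (lo, hi) in DETECTOR_RANGES_NM.items()
            if lo <= wavelength_nm <= hi]

print(candidate_detectors(850))    # -> ['silicon pin', 'germanium pin']
```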

The question that arises here is how many sources and detectors should be used and, even more importantly, where to locate them. In a recent paper [49], the translation of regions of interest (ROIs) into the placement of optodes on a measuring cap has been thoroughly investigated, as shown in Figure 3. The authors presented a toolbox that simplifies selecting the right fNIRS optode positions on the scalp. It is based on the overlap between the simulated photon transport from optodes placed at 130 positions on the cap and the regions of interest within the brain.

**Figure 3.** fNIRS cap layout with corresponding color-coded channels. Reproduced from reference [49].

Figure 4 shows a simple example of a montage with one source (in orange) and a total of eight detectors; hence, up to eight channels can be measured. When more than one source is used, the optical coupling between the light sources and detectors must be considered. It is in fact one of the most important factors affecting data quality. Poor coupling may lead to several types of measurement errors, including motion artifacts caused by contact-pressure variation, sliding of the probe along the skin, and light leaks. Motion artifacts can render signals partly or wholly useless. Light leaks may produce a signal that looks normal but has a lower physiological contrast-to-noise ratio than a signal unaffected by leakage [50]. Hence, such systems should be checked against dark noise before any measurement is taken.

**Figure 4.** An eight-channel montage with one source and eight detectors.

The main property of NIR devices is the maximum possible number of channels. Conventionally, a channel is defined as a possible path between an emitter and a detector. Therefore, the maximum possible number of channels for a system with 8 emitters and 8 detectors is 8 × 8 = 64. In practice, it is unlikely that the detectors receive a measurable signal from distant emitters, for instance on the other side of an adult head. Hence, detectors are placed within 3–4 cm of the corresponding emitters. Depending on the optode arrangement, a total of 20–25 valid channels can be obtained with systems of eight emitters and eight detectors [24]. One important parameter to consider is the movement artifact due to mechanical instability of the optodes on the subject. Various designs of optode holders that keep the optodes securely in place and retain stable contact and pressure against the skin have been proposed [13,25,51]. More importantly, caps that are lightweight and comfortable enough to allow some movement of the subject have been developed [52].
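The channel-counting rule above, where only emitter-detector pairs within the 3–4 cm window yield usable channels, can be sketched numerically. The positions, window bounds, and function name below are illustrative assumptions:

```python
import itertools
import math

# Illustrative sketch: an emitter-detector pair counts as a usable channel
# only if its separation falls within a given window (cm), following the
# 3-4 cm criterion described in the text. Positions are hypothetical 2D
# coordinates in cm.
def valid_channels(emitters, detectors, min_cm=2.0, max_cm=4.0):
    channels = []
    for (i, e), (j, d) in itertools.product(enumerate(emitters),
                                            enumerate(detectors)):
        if min_cm <= math.dist(e, d) <= max_cm:
            channels.append((i, j))
    return channels

# Two emitters and two detectors on a line: only 3 of the 4 pairs qualify,
# since one pair is 10 cm apart.
emitters = [(0.0, 0.0), (6.0, 0.0)]
detectors = [(3.0, 0.0), (10.0, 0.0)]
```

With a real 8 × 8 layout on a head cap, the same filtering reduces the 64 geometric pairs to the 20–25 practically valid channels mentioned in the text.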

Several types of systems are currently available for fNIRS measurements: continuous-wave (CW), frequency-domain (FD), and time-domain (TD). Theoretical light penetration depths and sensitivity profiles are very similar for a CW system, a 200 MHz modulated FD system, and a 500 ps pulsed TD system [13]. However, the light signals from FD and TD systems can typically penetrate deeper into the brain than those of CW systems. Besides, both FD and TD systems make it feasible to differentiate between the brain and extra-cerebral tissue in superficial regions. Some review papers have already compared these techniques [53,54]. Our paper, instead, is concerned with the recent advances in the instrumentation of these three systems proposed in the last few years.

#### **3. Continuous Wave fNIRS Instrumentation**

The simplest tissue spectroscopy method is the continuous-wave technique. It is based on steady illumination of the tissue and detection of the light intensity transmitted through it, as depicted in Figure 5. It thus gives an idea of the relative light attenuation without differentiating the effects of scattering and absorption. The strongest absorbers present in the blood are the hemoglobin molecules; hence, valuable information such as relative changes in blood volume and oxygenation can be obtained, and the relative concentration level can be evaluated with high reliability and contrast against the background. It is therefore no surprise that CW is currently the most widely used fNIRS technique. The CW technique is very useful as it is very sensitive, sampling intervals below one second are achievable, and CW systems can be made quite affordable for both spectroscopy and imaging. In CW systems, the source emits light at a constant intensity and the changes in intensity are then measured by a detector. The light penetration depth increases with the source-detector separation, but the measured intensity decreases, which leads to a low SNR, as illustrated in Figure 5.

**Figure 5.** The source emits light at a constant intensity and the changes in intensity are then measured by the detector. The figure illustrates two successively detected signals along with the possible photon paths ("banana shape") in different layers with various absorption and reduced scattering coefficients.

fNIRS systems can be miniaturized quite easily by employing commercially available light sources and detectors. However, achieving wearable fNIRS systems is quite challenging because of the high requirements on signal quality and system reliability. Most fNIRS systems employ two wavelengths, with laser diodes as emitters and PMTs or APDs as detectors. Figure 6 depicts a block diagram of such a multiwavelength system [55]. Digital gain control is used to equalize all the channels over a 20 dB range. Next, a multiplexer can be used to sample one wavelength at a time. Then, to avoid aliasing at 250 samples/s, the storage capacitor is oversampled by an analog-to-digital converter (ADC), giving a temporal resolution of the oxygenation measurement of >0.3 s. A modified Beer-Lambert law is used here, and it should be noted that the scattering is assumed to be both homogeneous and fixed [24,53,56].
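The two-wavelength modified Beer-Lambert inversion that such systems perform can be sketched as a 2 × 2 linear solve. This is a minimal illustration: the extinction coefficients below are placeholder values (not tabulated hemoglobin spectra), and the separation `d_cm` and differential pathlength factor `dpf` are assumed known:

```python
import math

# Minimal sketch of the modified Beer-Lambert inversion at two wavelengths.
# eps_hbo / eps_hb: illustrative extinction coefficients, 1/(mM*cm);
# d_cm: source-detector separation; dpf: differential pathlength factor.
def mbll_two_wavelengths(I0, I, eps_hbo, eps_hb, d_cm, dpf):
    """Return (dHbO2, dHb) concentration changes in mM."""
    dOD = [math.log10(a / b) for a, b in zip(I0, I)]  # attenuation changes
    # 2x2 system: dOD_i = (eps_hbo[i]*dHbO2 + eps_hb[i]*dHb) * d_cm * dpf
    a11, a12 = eps_hbo[0] * d_cm * dpf, eps_hb[0] * d_cm * dpf
    a21, a22 = eps_hbo[1] * d_cm * dpf, eps_hb[1] * d_cm * dpf
    det = a11 * a22 - a12 * a21
    dhbo = (dOD[0] * a22 - a12 * dOD[1]) / det
    dhb = (a11 * dOD[1] - dOD[0] * a21) / det
    return dhbo, dhb
```

Because the scattering term is assumed homogeneous and fixed, it cancels in the attenuation *changes*, which is exactly why CW systems recover only relative concentration variations.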

**Figure 6.** The assembly of a multiwavelength, multisource, multidetector fNIRS system.

Utilizing all eight sources during data collection provides a sampling frequency of 6.25 Hz when the sources are lit sequentially (standard mode). It can be increased by reducing the number of sources used during data collection. Employing only half of the eight sources, for example, results in a sampling frequency of 10.42 Hz, an increase of more than 4 Hz. Detectors are connected to the main unit by optical fibers, which consist of a core for transmitting light and a cladding that both keeps internal light from escaping and external light from entering.

A multiwavelength approach has also been considered, selecting optimum wavelengths over the complete NIR spectrum to recover the concentration changes [57]. Although progress is being made on devices and methods using this multi-spectral fNIRS approach, it has two drawbacks [58]: increased computational complexity, and the need to reduce the incident power, since multiwavelength light carries a higher total power than light with a limited number of wavelengths. Instead, one can sample at the two optimal wavelengths as many times as possible in order to achieve a higher SNR.

In pioneering work started in 2015, Von Lühmann et al. proposed a wireless fNIRS system for mobile neuroergonomics and Brain-Computer Interface (BCI) applications [59]. The system uses time-division multiplexing (TDM) of the fNIRS channels. Interestingly, the authors aimed to release the system as an open-source instrument. The proposed module offers four dual-wavelength fNIRS channels at 750 and 850 nm with a quite broad emission width of 30 to 35 nm. The incoherent and uncollimated character of these sources allows (i) a higher power to be used for tissue examination, (ii) the optodes to be in direct contact with the scalp, since there is almost no heating of the tissue, and (iii) eye safety. When using TDM, various factors have to be taken into account, such as inter-channel crosstalk, heating, and battery consumption. More importantly, the SNR is restricted by the width of the time frames used. Figure 7 shows this open-source fNIRS system.

**Figure 7.** Complete system with (**a**) single 4 channel fNIRS module and (**b**) Bluetooth module. Reproduced from reference [59].

Following that, the original system was significantly improved and renamed ninjaNIRS, as shown in Figure 8 [60]. The new system has a very small footprint and is scalable, supporting up to 128 optodes, which are the core of this modular design. The optode itself digitizes the signal, so the interface is a purely digital bus; each optode carries an onboard Field Programmable Gate Array (FPGA) alongside the multi-wavelength LED and photodetector. The long-term goal of this study has been to build a high-density fNIRS-EEG-eye-tracking system that can continuously monitor brain activity in real time over long intervals during movement, social interaction, and perception, whilst being portable, miniaturized, lightweight, and wearable.

**Figure 8.** ninjaNIRS optodes and controller. Reproduced from reference [60].

Another very interesting scheme was proposed in 2017 by Wyser et al. to achieve a wearable and modular fNIRS system with four wavelengths [40], as shown in Figure 9. The scheme features three main characteristics: (i) the ability to measure short-separation (SS) and long-separation (LS) channels, (ii) the use of four wavelengths, and (iii) a modular optode design that can be placed on different brain regions. High modularity is obtained via a miniaturized hardware design of the optode modules. It is worth mentioning that sources and detectors can be individually connected to a central unit. For many fNIRS applications, in particular BCIs, the ability to measure SS and LS channels helps to detect the desired signal and compensate for unwanted signals. Furthermore, four wavelengths are included to achieve more robust estimates of concentration changes.

**Figure 9.** A compact fNIRS instrument. (**a**) The PCB next to a 10-cent euro coin. (**b**) Picture of the fNIRS system with two optode modules. (**c**) Conceptual sketch illustrating the arrangement of the system. Reproduced from reference [40].

The following aspects have been carefully considered during the design process of this system and are summarized in Table 2.


**Table 2.** Main CW required and achieved parameters [40].

A third wearable CW-fNIRS system has been proposed by Chiarelli et al. [61]. It is fiber-less and multi-channel, based on silicon photomultiplier detectors and lock-in amplification. When optical fibers are used in an fNIRS system, mechanical constraints become a problem because of the difficulty of stabilizing the optodes onto the scalp to obtain the required coupling. To avoid optical fibers, the solution proposed in this study is direct contact between the sources and detectors on one side and the skin on the other. Sensitive detectors such as PMTs, however, cannot be located directly on the scalp: since PMTs are delicate, bulky, and operate at high voltages, they are impractical in real-life operation. A solution recently employed for fNIRS is the use of solid-state detectors such as single-photon avalanche diodes (SPADs), which feature high sensitivity, although their very small detector area is unfavorable. Using simple photodiodes for light detection, on the other hand, leads to low sensitivity and limited dynamic range in wearable CW-fNIRS systems.

As illustrated in the block diagram in Figure 10, the designed fNIRS system, named DigiLock, consists of three boards and an FPGA unit. The ADC board hosts all the components necessary for signal filtering and two sigma-delta converters (TI ADS1298). The LED board implements 32 time-multiplexed outputs for 16 dual-wavelength LEDs and an adjustable current source. The SiPM board contains an adjustable DC/DC converter for SiPM bias generation. Interestingly, during the multiplexing cycle, each combination of LED current and SiPM bias can be dynamically adjusted for optimal signal acquisition. The FPGA board is a single-board computer (MYIR Z-turn) built around the Xilinx Zynq 7Z020 all-programmable system-on-chip (SoC). Other useful peripherals, such as RAM, flash memory, USB, Ethernet, and a temperature sensor, were added to the board. Using a tailored hardware description language (HDL) program, all the essential parts needed to execute the lock-in algorithm were implemented within the FPGA. After reading the data from the ADCs, the FPGA handles the processing chain implied by the lock-in algorithm and the time-sharing synchronization among the LEDs and the ADCs. Employing this algorithm, automated calibration of each LED current and of the SiPM bias was implemented.

**Figure 10.** Block diagram of the DigiLock system. Reproduced from reference [61].

The fourth system is modular, fiberless, and features a flexible-circuit-based wearable fNIRS design, as shown in Figure 11a,b [62]. It is called the Advanced Optical Brain Imaging (AOBI) system. To facilitate efficient tessellation over a surface such as the head, the module was designed with a diamond-like shape. This system can be used for quite long periods with highly flexible coverage. A single AOBI module has one long-separation channel of 30 mm, four medium-separation channels of 21.4 mm, and one short-separation channel of 8 mm.

**Figure 11.** (**a**) The AOBI module; (**b**) top view of the flexible circuit board; (**c**) a three-module configuration. Reproduced from reference [62].

Figure 11c shows a configuration of three flexible AOBI modules placed over an optical phantom. It results in 54 dual-wavelength channels, 26 of which are below the 40 mm separation used in fNIRS systems. The sampling frequency of all 54 dual-wavelength channels is 33.3 Hz, while a single AOBI module samples at 100 Hz. An SNR of more than 50 dB has been achieved in all intra-module channels, while inter-module channels show an SNR of more than 40 dB up to a 52 mm SD separation.

Thus, this system offers features tailored towards full-head coverage and was built following a fiberless, wearable, and modular approach. The flexible-circuit configuration enables the modules to bend and conform, enhancing optode-scalp coupling. Moreover, the diamond module shape is well suited to covering head surfaces. Hence, the main present constraint of bulky fiber-optic systems can be replaced by much smaller and lighter electrical connections. In the near future, these wearable and low-cost systems will considerably help in acquiring high-density fNIRS measurements. For the preprocessing of the collected fNIRS signals and the filtering of the different types of noise (instrumental noise, experimental error, and physiological noise), the reader is referred to Refs. [63–65]. For advanced postprocessing, feature extraction, and classification techniques, the reader is referred to Refs. [66–68].

#### **4. Frequency-Domain fNIRS Instrumentation**

The continuous-wave modality is useful for measuring light-intensity attenuation. However, there are two main limitations to the use of CW fNIRS instruments. First, CW instruments rely on the modified Beer-Lambert law, which assumes a constant degree of scattering at all sites. Second, the distance traveled by the light, the differential pathlength (L), must be estimated, since this mode contains no direct information about the time of flight. Hence, it is very difficult to separate absorption from scattering in a heterogeneous medium using CW systems. Frequency-domain (FD) instruments are the evolution of CW NIRS instruments. Thirty years ago, the FD modality was recognized as an alternative technique for measuring the absorption and scattering coefficients. In the 1990s, the work focused on developing the theory and building prototypes [26,69–73]. Those developments led to what is now the only commercially available FD-based instrument, by ISS (Table 3). Since 2000, the main works have focused on applications, mainly in the areas of breast and brain imaging, and on validating these applications in large clinical studies [74–79].

In an FD fNIRS system, the NIR light is modulated at a particular radio frequency (RF), usually in the range of a few hundred MHz. The selection of the frequency is based on the size and depth of the imaged object [76]. A high modulation frequency is suitable, for example, for imaging small breast lesions near the surface, while a low modulation frequency is suitable for imaging deeper and larger lesions, as illustrated in Figure 12. Ideally, all modulation frequencies should be used to obtain the most accurate optical reconstruction of the imaged object. On the other hand, acquiring NIR measurements at all modulation frequencies is unfavorable in the clinical setting, to avoid patient motion. Therefore, one modulation frequency is usually selected for clinical studies based on the depth of the targeted area.

As photons propagate into deeper tissue, the phase-shift measurement quantifies the degree of photon scattering in the tissue; therefore, the tissue scattering parameter no longer needs to be assumed. Both the phase and the amplitude of the attenuated NIR light can be extracted from the measurements of the FD system. Figure 13 illustrates the principle of dual-phase lock-in detection [80]. The system consists primarily of two mixers, two low-pass filters, and a 90° phase shifter. The first mixer mixes the signal with the reference, and the first lowpass filter ensures the output is a DC signal, S1 = 0.5 A cos(φ). In the same manner, the 90°-shifted reference is mixed with the original signal, and the second lowpass filter ensures the output is a DC signal, S2 = 0.5 A sin(φ). From *S*1 and *S*2, the amplitude-phase unit extracts both the amplitude and the phase based on:

$$A = 2\sqrt{S_1^2 + S_2^2} \tag{1}$$

$$\phi = \arctan\frac{S_2}{S_1} \tag{2}$$
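Equations (1) and (2) can be checked with a short numerical simulation of the dual-phase scheme. This is a minimal sketch: the modulation frequency, sampling rate, and function name are illustrative assumptions, averaging over whole periods stands in for the low-pass filters, and the signal is modeled as A·cos(ωt − φ) so that S2 comes out as +0.5·A·sin(φ), matching the sign convention above:

```python
import math

# Numerical sketch of dual-phase lock-in detection (Eqs. (1)-(2)).
# Mixing with the reference and its 90-degree-shifted copy, then averaging
# over whole periods (the "low-pass filter"), yields S1 = 0.5*A*cos(phi)
# and S2 = 0.5*A*sin(phi). Frequencies here are illustrative only.
def lock_in(amplitude, phase_rad, freq_hz=1.0e6, fs_hz=64.0e6, n_periods=100):
    n = int(fs_hz / freq_hz) * n_periods          # integer number of periods
    w = 2 * math.pi * freq_hz
    t = [k / fs_hz for k in range(n)]
    sig = [amplitude * math.cos(w * tk - phase_rad) for tk in t]
    s1 = sum(s * math.cos(w * tk) for s, tk in zip(sig, t)) / n  # in-phase mixer
    s2 = sum(s * math.sin(w * tk) for s, tk in zip(sig, t)) / n  # 90-deg mixer
    A = 2 * math.sqrt(s1 ** 2 + s2 ** 2)          # Eq. (1)
    phi = math.atan2(s2, s1)                      # Eq. (2), quadrant-safe
    return A, phi
```

Using `atan2` instead of a bare `arctan` resolves the quadrant ambiguity of Equation (2) when S1 is negative.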

If the NIR probe consists of *N* wavelengths, *S* sources, and *D* detectors, and since both the amplitude and the phase can be extracted for each source-detector pair, each set of measurements contains a total of *M* = 2 × (*N* × *S* × *D*) measurements. As stated earlier, when NIR light penetrates human tissue, scattering dominates absorption in the light propagation. This imposes a significant challenge on FD-NIRS optical tomography with regard to spatial resolution and localization accuracy. In fact, the spatial resolution of optical tomography is limited by the signal-to-noise ratio to on the order of 20% of the imaging depth [80].

For breast imaging applications, and to overcome the spatial-resolution challenge, several groups have studied the co-registration of FD-NIRS optical tomography with other high-resolution imaging modalities such as MRI, ultrasound, and mammography [81–84]. In this approach, high-spatial-resolution images are used to guide the optical functional imaging with high localization accuracy. One research group has investigated the co-registration of mammographic X-ray images with optical breast imaging: together, the functional information provided by FD-NIRS optical tomography and the anatomical information provided by mammography offer information that neither modality can provide single-handedly [83].

**Figure 13.** Block diagram of a conventional dual-phase lock detection system.

Once the optical scanning is completed, the optical probe shown in Figure 14a is removed to allow the breast to be compressed for the mammography scan. The pressure pain associated with mammography scanning is the main disadvantage of this approach. Alternatively, MRI-guided optical imaging has been investigated by another group to image adipose and fibroglandular breast tissue [82]. In this approach, the optical probe has a circular geometry consisting of six laser diodes emitting light at two wavelengths, 660 and 850 nm, modulated at 100 MHz, as presented in Figure 14c. For each source illumination, measurements are collected from 15 locations with photomultiplier tube (PMT) detectors. Unlike the mammography approach, the MRI and NIR data are acquired simultaneously. The bulk and high cost of MRI systems are real challenges for this technique.

**Figure 14.** Co-registration of FD-NIRS optical tomography with (**a**) mammography (Reproduced from reference [85]); (**b**) ultrasound (Reproduced from reference [84]); and (**c**) with MRI for breast imaging [82]; Copyright (2006) National Academy of Sciences, U.S.A.

The ultrasound (US)-guided optical imaging approach has also been investigated [76,84]. Here, co-registered B-scan ultrasound images are used to improve the localization of the breast tumor, while the optical imaging provides absorption information on the tumor vasculature. In the flat-surface optical probe geometry illustrated in Figure 14b, the probe consists of 9 source locations and 14 PMT detectors. The NIR light is emitted by four laser diodes at wavelengths of 730, 785, 808, and 830 nm, modulated at 140 MHz. The B-scan ultrasound probe is located at the center of the optical probe, surrounded by the NIR sources and detectors. Ultrasound is considered safe for patients, its cost is relatively low, and the system is portable. Figure 14 depicts the three different approaches aimed at breast imaging.

In the last five years, there has been renewed interest in improving the fNIRS technology itself, leading to new advantages associated with FD and potentially to new applications. Roblyer and his team have recently developed an ultrafast frequency-domain diffuse optics system with a deep-neural-network (DNN) processing method to measure the optical properties [86]. The DNN replaces the time-consuming Levenberg-Marquardt iterative algorithm previously adopted to fit the calibrated amplitude and phase measurements to an analytical forward model. Compared to the iterative algorithm, the DNN is 3–5 orders of magnitude faster at estimating the optical properties of the measured tissue. Therefore, the developed system combined with the DNN enables a robust tissue oxygenation monitor that can acquire, process, and display absolute hemoglobin concentrations at a rate high enough to capture the cardiac cycle [86].

In an effort to maximize the penetration depth, Sassaroli et al. have shown that, with combinations of sources and detectors in a dual-slope method, phase information can provide deeper sensitivity [87,88]. In this theoretical work, the authors presented a dual-slope method (gradients versus source-detector separations) requiring at least two sources and two detectors arranged symmetrically. Compared to the conventional single-slope method, the dual-slope method achieves maximal depth sensitivity for all three FD-NIRS data types (DC, AC, and phase).

In their recent work, Doulgerakis et al. systematically studied the reconstructed image quality when phase-shift measurements from an FD high-density measurement system are incorporated [89]. Phase information was shown to provide not only deeper sensitivity but also higher effective resolution than the CW method [89]. This could be very important for fNIRS, where gaining as little as 1 mm or 2 mm of depth means reaching deeper into the cortex. Both works, by Sassaroli et al. [87] and Doulgerakis et al. [89], showed experimentally that FD systems appear to be sensitive to deeper optical layers. Different approaches exist for reconstructing images and recovering the optical properties from FD-NIRS measurements; for instance, a two-step image reconstruction approach was investigated in Refs. [90–92]. Moreover, regularization techniques such as Tikhonov and Levenberg-Marquardt regularization have also been studied; the reader is referred to Refs. [93–95] for more information.

#### **5. Time-Domain fNIRS Instrumentation**

In time-domain fNIRS systems, the tissue is irradiated with short picosecond pulses. On the detection side, detectors with a very fast response have to be used in order to record the amplitude of the light pulse as it leaves the tissue, as shown in Figure 15. Typically, the received signal is smeared out compared with the original pulse: the photons travel paths of randomly distributed lengths, interacting with the different diffusive layers of the tissue and undergoing various scattering events, which forms the distribution of time-of-flight (DTOF) of the received photons. Hence, the absorption and scattering properties of the tissue can be assessed from the peak of the pulse and its timing, area, and width. By integrating the temporal profile, the intensity can be obtained.

**Figure 15.** Mechanism of TD NIRS showing the incident short light pulse and two successively detected signals, along with the possible photon paths ("banana shape") in different layers with various absorption and reduced scattering coefficients.

Next, the modified Beer-Lambert law can be used to evaluate the absorption variations. Moreover, the mean optical pathlength is calculated from the center of gravity of the temporal profile [96]. The computations of the mean pathlength and absorption variations are model-independent. The values of the reduced scattering and absorption coefficients (μ<sub>s</sub>′ and μ<sub>a</sub>) can then be obtained with the nonlinear least-squares method, fitting the diffusion equation in reflectance mode to all the observed temporal profiles [97]. In turn, absolute concentration levels can be obtained with this technique, which enables time-resolved measurements at any given source-detector distance, in principle down to zero. The intracerebral and extracerebral absorption variations are then determined from the moments (integral, mean time of flight, and variance) of the DTOFs [98].
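The moment analysis described above can be sketched in a few lines. The DTOF histogram below is synthetic, and the pathlength conversion assumes a nominal tissue refractive index of ~1.4 (an illustrative value, not one stated in the text):

```python
# Sketch of moment analysis of a distribution of time-of-flight (DTOF):
# the integral gives the detected intensity, the first moment (centre of
# gravity) the mean time of flight, and the second central moment the
# variance. Histogram values are synthetic.
def dtof_moments(times_ps, counts):
    total = sum(counts)                                       # integral
    mean_t = sum(t * c for t, c in zip(times_ps, counts)) / total
    var_t = sum((t - mean_t) ** 2 * c
                for t, c in zip(times_ps, counts)) / total
    return total, mean_t, var_t

def mean_pathlength_mm(mean_t_ps, n_tissue=1.4):
    """Mean optical pathlength <L> = (c/n) * <t>, with c in mm/ps."""
    c_mm_per_ps = 0.299792458   # speed of light in vacuum
    return (c_mm_per_ps / n_tissue) * mean_t_ps
```

A mean time of flight of 1 ns, for instance, corresponds to a mean pathlength of roughly 21 cm in tissue, which is why TD measurements are so much more informative about pathlength than CW ones.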

One of the early TD-fNIRS systems is from Hamamatsu Photonics KK, Hamamatsu, Japan [99]. It utilizes three wavelengths with a generated light pulse width of 100 ps, a peak power of 60 mW, an average power of 30 μW, and a pulse rate of 5 MHz, as depicted in Figure 16. On the detection side, a PMT is used in photon-counting mode. The received signals are then processed by a TRS circuit, which consists mainly of a time-to-amplitude converter, an ADC, and a histogram memory. Optical fibers are used to illuminate the tissue and to collect the diffuse light. Two transmitters and two detectors are used, so two spots can be illuminated at the same time.

**Figure 16.** Photograph and schematic diagram of a time-resolved spectroscopy system. Reproduced from reference [14].

There have also been attempts from academia to build TD-fNIRS systems to assess intracerebral and extracerebral absorption changes, as shown in Figure 17 [100]. Indocyanine green (ICG) bolus tracking was utilized for the clinical assessment of brain perfusion at the bedside, using a time-correlated single-photon counting (TCSPC) electronics scheme [45]. A supercontinuum light was generated by fiber lasers with a repetition frequency of 40 MHz for in-vivo measurements and 80 MHz for phantom studies. Optical fibers (length 2 m, NA = 0.22, diameter 400 μm) were used to deliver light to the surface of the phantoms or tissue. A low power level of 20 mW was used, with a power density of no more than 2 mW/mm<sup>2</sup> at the surface of the skin. A fiber bundle (length 1.5 m, NA = 0.22) was utilized to transfer the photons to the detection system. For in-vivo measurements the source-detector separation was *r* = 3 cm, and for phantom studies it was *r* = 1, 2, 3, and 4 cm. The detection module consisted of a polychromator (NA = 0.135, with a 77,414 diffraction grating) and a 16-channel PML. Additional photon losses in the photodetection system were due to the discrepancy between the numerical apertures of the detection bundles (0.22) and the polychromator (0.135); on the other hand, a lower *NA* of the fiber bundles would cause photon losses at the tissue side. Absorption changes were evaluated from the mean time of flight and the variance of the photon distributions, and by analyzing the changes in the total number of received photons measured at 16 wavelengths in the 650–850 nm range, which replaces the earlier technique of measuring at multiple distances with different separations. Phantom as well as in-vivo measurements were carried out for validation.

**Figure 17.** The setup for multiwavelength time-resolved diffuse reflectance measurements. Reprinted with permission from [100] © The Optical Society.

The results of the phantom and in-vivo measurements indicated that the optical signal detected at *r* = 3 cm has the proper quality to assess blood flow in the brain cortex with high precision. The main advantage of this design is that it requires a single source-detector separation. A modified algorithm based on the acquisition of DTOFs for the single source-detector pair was utilized in this study; it is based on assessing the changes of the moments of the DTOFs measured at all wavelengths.

More recently, another TD-fNIRS system was built based on a four-wave mixing (FWM) laser and fast-gated single-photon avalanche diodes, as shown in Figure 18. The FWM laser delivered light at two wavelengths, namely 710 and 820 nm, with a temporal duration of about 25 ps FWHM and a repetition rate of 40 MHz. A variable optical attenuator was used to attenuate the light beam, and a collimator was then used to couple the light into a 400 μm fiber, with which the sample was illuminated. The diffused light was collected by two 1 mm core fibers (*NA* = 0.37). The separation between the source and the detector was 0.5 or 3 cm. In order to evaluate the concentrations of HbO2 and Hb in the in-vivo measurements, a filter centered at 710 or 820 nm was used to distinguish the two wavelengths. Fast-gated single-photon avalanche diode (FG-SPAD) detector modules were used. A synchronization signal was taken from the laser and split into two parts: the first part fed the FG-SPAD modules to trigger the detectors, while the second part was fed to the time-correlated single-photon counting (TCSPC) circuit to provide the "stop" signal for both acquisition boards. Accordingly, the "start" signal for the TCSPC was delivered by each FG-SPAD module.

**Figure 18.** Setup schematics for the in vivo experiment. Reproduced from reference [101].

Phantom as well as in-vivo tests were used for the system characterization. Using the fast-gating technique with a small inter-fiber distance allowed a great increase in early photons when the space between the source and the detector was reduced. This is a big advantage compared with a non-gated detector, where this peak of "early photons" would saturate the dynamic range, decreasing the ability to discriminate a perturbation in depth. The study showed that the gating scheme can enhance the contrast and contrast-to-noise ratio for the detection of an absorption perturbation, irrespective of the source-detector distance.

#### **6. CW, FD and TD Comparison and Commercial Systems**

FD systems operate by emitting light continuously from a source whose intensity varies sinusoidally at frequencies on the order of megahertz. Detectors measure both the reduction in intensity and the phase shift of the light after it passes through tissue. Combining this information allows a direct measure of the absorption and scattering coefficients, assuming that HbO2 and Hb are the only absorbers that contribute significantly, which eliminates the need to define a pathlength for the light. The two main advantages of FD systems are high temporal resolution and absolute quantification of HbO2 and Hb concentrations. Disadvantages include a relatively large amount of noise in the scattering measurements, as well as greater complexity and, therefore, higher cost than some other NIRS systems. Unlike FD systems, TD systems emit light in short, picosecond-order bursts, or impulses, rather than continuously. These short impulses are broadened to a few nanoseconds, as well as reduced in amplitude, upon traversing biological tissue, and the resulting signal is known as either the temporal point spread function (TPSF) or the distribution of time-of-flight (DTOF). The broadening of the initial impulse is a consequence of the highly scattering biological tissue: not every photon follows the same path between source and detector.
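As an illustration of how the FD amplitude attenuation and phase shift translate into absolute optical properties, a closed-form multi-distance inversion can be sketched: the slopes of ln(r·AC) and of the phase versus source-detector distance r yield μa and μs' under the infinite-medium diffusion approximation. All numbers below are synthetic and do not come from any specific system reviewed here.

```python
import numpy as np

C_VACUUM = 2.998e10  # speed of light in vacuum [cm/s]

def fd_multidistance(r, ac, phase, f_mod, n=1.4):
    """Return (mu_a, mu_s') [cm^-1] from multi-distance FD data.

    r     : source-detector distances [cm]
    ac    : AC amplitude at each distance
    phase : phase lag at each distance [rad]
    f_mod : modulation frequency [Hz]; n : tissue refractive index
    """
    v = C_VACUUM / n
    omega = 2.0 * np.pi * f_mod
    s_ac = np.polyfit(r, np.log(r * ac), 1)[0]   # amplitude slope (< 0)
    s_ph = np.polyfit(r, phase, 1)[0]            # phase slope (> 0)
    mu_a = (omega / (2.0 * v)) * (s_ph / s_ac - s_ac / s_ph)
    mu_sp = (s_ac**2 - s_ph**2) / (3.0 * mu_a) - mu_a
    return mu_a, mu_sp

# Synthetic forward model: infinite-medium photon-density wave
# U(r) ~ exp(-kappa * r) / r, kappa^2 = 3 (mu_a + mu_s') (mu_a - i omega / v)
n, f_mod = 1.4, 100e6
omega, v = 2.0 * np.pi * f_mod, C_VACUUM / n
mu_a_true, mu_sp_true = 0.1, 10.0                # typical tissue values [cm^-1]
kappa = np.sqrt(3.0 * (mu_a_true + mu_sp_true) * (mu_a_true - 1j * omega / v))
r = np.linspace(1.5, 3.0, 7)                     # distances [cm]
u = np.exp(-kappa * r) / r
mu_a, mu_sp = fd_multidistance(r, np.abs(u), np.angle(u), f_mod, n)
print(mu_a, mu_sp)                               # recovers ~0.1 and ~10.0
```

Because only slopes versus distance enter the formulas, this scheme needs no absolute calibration of the source power or detector gain, which is one practical appeal of multi-distance FD measurements.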

By determining a photon's time of flight, its pathlength can be calculated directly using the speed of light. Like FD systems, TD systems are also able to determine absorption and scattering coefficients. However, TD systems have an even greater overall cost than FD systems. They also require relatively long acquisition times to obtain a reasonable SNR and have rather large dimensions, with the need for physical stabilization. An advantage of TD systems over others, though, is the potential for greater spatial resolution, as Torricelli and colleagues demonstrated with zero-separation measurements. Similar to FD systems, the light sources of CW systems emit light continuously, as their name implies. Depending on the specific hardware, the emitted light intensity either has a constant amplitude or varies sinusoidally at frequencies at or below tens of kilohertz. Combining the detected signal intensities with estimates of the differential pathlength factor (DPF) allows the calculation, via the MBLL, of relative hemoglobin concentration changes. The primary advantages of CW systems over the others are their simplicity, smaller size, and low cost. Moreover, CW systems provide the best SNR at sampling frequencies above 1 Hz, as well as the potential for the highest sampling rate. However, they also have several disadvantages. CW systems cannot determine absolute quantities of HbO2 and Hb [24] and cannot distinguish between absorption and scattering. This limits their accuracy, as the overall scattering coefficients of the investigated tissues are subject-dependent. Therefore, a single-point CW-NIRS measurement only provides variations of hemoglobin concentration. However, measuring the light attenuation at a number of source-detector separations enables estimation of the absolute μ<sub>a</sub> of the tissue by fitting the measured spatially resolved light attenuation to the solution of the diffusion equation [102].
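The MBLL step described above reduces to solving a small linear system: at each wavelength, the optical-density change is the product of the effective pathlength (DPF times the source-detector separation) and the weighted sum of the chromophore concentration changes. The sketch below assumes a two-wavelength CW measurement; the extinction coefficients and DPF values are illustrative placeholders, not tabulated spectra.

```python
import numpy as np

def mbll_delta_conc(d_od, ext, dpf, d):
    """Recover (dHbO2, dHb) from optical-density changes at two wavelengths.

    d_od : optical-density change per wavelength, dOD = log10(I0 / I)
    ext  : 2x2 extinction coefficients [cm^-1 / mM];
           rows = wavelengths, columns = (HbO2, Hb)
    dpf  : differential pathlength factor per wavelength
    d    : source-detector separation [cm]
    """
    # Effective pathlength per wavelength: DPF times geometric separation
    path = np.asarray(dpf, dtype=float) * d
    # dOD_i = path_i * (ext_i,HbO2 * dHbO2 + ext_i,Hb * dHb) -> 2x2 system
    a = np.asarray(ext, dtype=float) * path[:, None]
    return np.linalg.solve(a, np.asarray(d_od, dtype=float))

# Placeholder numbers for two wavelengths (e.g., 760 and 850 nm)
ext = [[1.5, 3.8],    # hypothetical extinction coefficients [cm^-1 / mM]
       [2.5, 1.8]]
d_hbo2, d_hb = mbll_delta_conc(d_od=[0.02, 0.03], ext=ext,
                               dpf=[6.0, 5.0], d=3.0)
print(d_hbo2, d_hb)   # concentration changes in mM (relative, not absolute)
```

Because the DPF is only estimated, the recovered values are relative changes; this is precisely the limitation of single-point CW measurements noted above.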
Last, any change in optode position or in the amount of pressure against the scalp can significantly alter the detected intensities. Finally, Table 3 gives an overview of the commercially available fNIRS systems. Most of these systems are based on the CW technique; nevertheless, they differ in the number and type of sources and detectors, the number of channels, the wavelengths used, and the sampling rate.



#### **7. Conclusions and Future Perspective**

In this paper, we have discussed recently developed fNIRS systems from an instrumentation point of view. More specifically, the main features, differences, and similarities among the three modalities (CW, FD, and TD) used to build fNIRS systems have been reviewed and discussed. It is evident that the FD modality provides more information than its CW counterpart; thus, better quantification of the optical properties of tissue and higher depth sensitivity are possible advantages of the FD technique. However, the complexity of FD systems and their relatively high cost are clear disadvantages. The recent renewed interest in FD fNIRS technology has made the FD technique faster, with better resolution, and could provide higher sensitivity for imaging deeper tissues, paving the way for many potential applications. TD-NIRS systems, on the other hand, have not been as popular as CW-NIRS systems due to their complexity. Nevertheless, we have reviewed several recent publications that reported TD systems and carried out phantom and in vivo measurements of hemoglobin concentration. Interestingly, with only one channel, it is possible to estimate the optical properties of the tissue from the evaluation of the distribution of the time of flight of photons. Unlike TD and FD systems, the compactness and simplicity of CW systems have notably allowed this modality to become commercially available for numerous applications. Current developments in the size and sensitivity of semiconductor optical detectors will further enable high-density fNIRS systems; with more channels available for measurement, the quantification of the optical properties of tissues will ultimately be enhanced.

**Author Contributions:** Conceptualization, M.A. and I.A.-N.; methodology, M.A. and I.A.-N.; investigation, M.A. and I.A.-N.; resources, M.A. and I.A.-N.; writing—original draft preparation, M.A. and I.A.-N.; writing—review and editing, M.A. and I.A.-N.; visualization, M.A. and I.A.-N.; project administration, M.A.; funding acquisition, M.A. and I.A.-N. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research was funded by the Deanship of Scientific Research at Imam Abdulrahman Bin Faisal University, Saudi Arabia under project number 2019-013 Eng.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
