**In Vivo Tumor Vascular Imaging with Light Emitting Diode-Based Photoacoustic Imaging System**

#### **Marvin Xavierselvan <sup>1</sup>, Mithun Kuniyil Ajith Singh <sup>2</sup> and Srivalleesha Mallidi <sup>1,3,\*</sup>**


Received: 23 July 2020; Accepted: 8 August 2020; Published: 12 August 2020

**Abstract:** Photoacoustic (PA) imaging has shown tremendous promise for imaging tumor vasculature and its function at deeper penetration depths without the use of exogenous contrast agents. Traditional PA imaging systems employ expensive and bulky class IV lasers with low pulse repetition rates, which hampers their availability for preclinical cancer research. In this study, we evaluated the capability of a Light-Emitting Diode (LED)-based PA and ultrasound (US) imaging system for monitoring heterogeneous microvasculature in tumors (up to 10 mm in depth) and quantitatively compared the PA images with gold-standard histology images. We used a combination of a 7 MHz linear-array US transducer and 850 nm LED arrays to image blood vessels in a subcutaneous tumor model. After imaging, the tumors were sectioned and stained for endothelial cells to correlate with PA images across similar cross-sections. Analysis of 30 regions of interest in tumors from different mice showed a statistically significant correlation (R = 0.84): areas with high blood vessel density had a high PA response, while regions with low blood vessel density had a low PA response. Our results confirm that LED-based PA and US imaging can provide 2D and 3D images of tumor vasculature and has potential as a valuable tool for preclinical cancer research.

**Keywords:** LED; photoacoustic imaging; ultrasound; tumor imaging

#### **1. Introduction**

The critical role of vasculature in tumor growth and metastasis is undisputed. Tumor aggressiveness is highly correlated with the extent of aberrant tumor vascular structure and function. To meet their increasing needs for oxygen and nutrients, cancers induce neovascularization, one of the hallmarks of cancer [1]. The heterogeneity in tumor vasculature [2] affects drug distribution [3,4] and, as a result, impacts treatment outcomes. Indeed, many anti-angiogenic agents and therapies that prune these blood vessels and prevent nutrients from reaching the tumor have been developed [5]. Monitoring the dynamic changes in tumor vasculature post therapy is key for prognosis [6]. Several imaging modalities have been used to study changes in tumor vasculature at both the structural and functional levels. More recently, photoacoustic (PA) imaging has gained tremendous popularity for imaging tumor vasculature [7–10]. PA imaging involves the detection of acoustic waves generated when optical absorbers, such as hemoglobin in blood, are irradiated with nanosecond pulsed light and undergo thermo-elastic expansion and contraction [11]. The acoustic waves are picked up by an ultrasound (US) transducer and processed to generate a PA image [11,12]. Notably, two features of PA imaging make it highly suitable for imaging vasculature: (1) the ability to image blood vessels without exogenous contrast agents, and (2) the ability to complement US imaging, which provides structural information about the tumor.

Traditionally, PA imaging systems use lasers (typically Nd:YAG-pumped optical parametric oscillator lasers) to excite the optical absorbers and employ a US transducer (array or single element) to receive the generated acoustic waves. These lasers are expensive, bulky, and often have low repetition rates, leading to low image frame rates and acquisition speeds. These limitations hinder the wide adoption of PA imaging systems in both preclinical and clinical applications. Advances in the semiconductor industry have led to powerful Light-Emitting Diodes (LEDs) that are compact, portable, affordable, and energy-efficient. AcousticX (Cyberdyne Inc., Tsukuba, Japan) is an LED-based system that can perform PA and US imaging at high frame rates. These LEDs can be fired at a pulse repetition frequency (PRF) of 1 to 4 kHz. Despite the low optical output (400 μJ/pulse when using two 850 nm arrays), the high PRF of the LEDs makes it possible to average several frames, generating real-time PA images with reasonable signal-to-noise ratio (SNR) [13–17]. Furthermore, the spatial resolution and SNR provided by LED-based PA imaging are comparable to those of laser-based PA systems [13]. Owing to these inherent advantages, several research groups have explored the LED-based PA system for various potential applications [15,16,18–21]. However, none of these studies validated the generated PA images against histology. In this study, for the first time to the best of our knowledge, we report the LED-based PA imaging system's ability to image microvasculature in subcutaneous murine tumor xenografts and validate the results using histology.

#### **2. Materials and Methods**

#### *2.1. LED-Based PA Imaging System*

An AcousticX system integrated with a 7 MHz US transducer (128 elements, 80% bandwidth at −6 dB) and two 850 nm LED arrays (for deeper light penetration) attached on either side of the transducer at a 30° angle was used for in vivo imaging (Figure 1A–C). The mean lateral and axial resolutions of AcousticX with the 7 MHz transducer are 460 and 220 μm, respectively [14,19]. Each LED array had height, length, and width dimensions of 12.4, 86.5, and 10.2 mm, respectively, and contained 144 individual LEDs arranged in 4 rows (36 elements/row). The PRF of the LED array could be set to 1, 2, 3, or 4 kHz. In this study, we operated the LED arrays at a PRF of 4 kHz to acquire and save PA images of the tumor vasculature at a frame rate of approximately 10 Hz; i.e., 384 PA frames were continuously averaged to generate one display image. For offline MATLAB analysis, a higher number of frames (12,800) was averaged to enhance the PA image SNR. The SNR was defined as SNR = S/σ, where S is the mean of the image amplitude in a signal region and σ is the standard deviation of the image amplitude in a noise region. The signal region was empirically chosen as a rectangular region (4 × 4 pixels) near a feeding blood vessel on the tumor surface where the PA signal was apparent; the noise region was manually chosen as a rectangular region outside the tumor.
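As a sketch of how the SNR metric above can be computed from two rectangular image regions, the following Python snippet implements SNR = S/σ on a synthetic image. It is illustrative only; the actual analysis used custom MATLAB code, and the ROI coordinates here are made up.

```python
import numpy as np

def snr_db(image, signal_roi, noise_roi):
    """SNR as defined in the text: S / sigma, reported in dB, where S is the
    mean amplitude in a signal ROI and sigma is the standard deviation of
    the amplitude in a noise ROI. ROIs are (row_slice, col_slice) tuples."""
    s = image[signal_roi].mean()
    sigma = image[noise_roi].std()
    return 20.0 * np.log10(s / sigma)

# Synthetic example: a bright 4x4 "vessel" patch on a noisy background.
rng = np.random.default_rng(0)
img = np.abs(rng.normal(0.0, 1.0, (128, 128)))
img[60:64, 60:64] += 50.0
vessel = (slice(60, 64), slice(60, 64))
background = (slice(0, 20), slice(0, 20))
print(f"{snr_db(img, vessel, background):.1f} dB")
```

The 20·log10 convention is used because S and σ are amplitude (not power) quantities.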

**Figure 1.** (**A**) Schematic of our experimental setup in which the Light-Emitting Diode (LED)-based photoacoustic (PA) system was utilized for imaging subcutaneous tumors in a mouse. (**B**,**C**) LED array used for illumination and the angle at which the LEDs are connected to the ultrasound transducer. (Scale bar = 2 mm).

#### *2.2. Animal Model*

All animal experiments were performed in compliance with the Institutional Animal Care and Use Committee (IACUC) of Massachusetts General Hospital (MGH). Swiss nu/nu female mice (aged 6–8 weeks, body weight 20–25 g) were raised under aseptic conditions in the institution's animal facility with a 12 h light/dark cycle. We used a human hypopharyngeal squamous cell carcinoma (FaDu) xenograft model because these tumors are highly vascularized [22,23]. FaDu cells were cultured in Eagle's Minimum Essential Medium supplemented with 10% fetal bovine serum and 1% antibiotics (Penicillin and Streptomycin 1:1 *v*/*v*) in a T75 flask and maintained at 37 °C and 5% CO2. At 80–90% confluency, cells were trypsinized and centrifuged to remove the supernatant. The cells were then suspended in a 1:1 *v*/*v* mixture of Matrigel (BD Biosciences, CA, USA) and phosphate-buffered saline at a density of 1 million cells per 100 μL and injected subcutaneously into the mice.

#### *2.3. In-Vivo Photoacoustic Imaging*

A total of 3 mice were imaged using the AcousticX system when the tumors reached 6–8 mm in diameter, as measured with a digital caliper (VWR 62379-531, Radnor, PA, USA). The mice were anesthetized with 2% isoflurane and placed in a warm water bath to achieve optical coupling with the US transducer, as shown in Figure 1A. The LED arrays were positioned at a 30° angle to the US transducer (Figure 1B) to colocalize the light with the transducer focus. The US transducer was then positioned vertically such that the tumor was approximately at the focal region of the transducer (~20 mm depth). Radio-frequency (RF) data and US/PA images for several cross-sections of the tumors were acquired and saved for further analysis. Motion artifacts in the tumor region due to mouse breathing were minimal, as shown in Video S1 (captured at a 10 Hz frame rate; see Supplementary Materials). In the post-acquisition image analysis, frames with breathing motion were excluded. The RF data were processed with previously reported algorithms [24–27] using custom-written MATLAB code to obtain US/PA images of the tumor vasculature. Three-dimensional imaging was performed by moving the US/PA transducer linearly across the XY plane with the aid of an in-built translation stage, and the sequence was rendered in ImageJ.

#### *2.4. Histology*

Post imaging, the mice were sacrificed and the tumors were surgically removed without skin. The tumors were embedded and frozen in optimal cutting temperature (OCT) compound in the same orientation as the US/PA image, with the aid of fiducial markers placed prior to euthanasia. The frozen tumors were cryosectioned (5 μm) in the same plane as the US/PA images based on previously published methodology [28] and stained for platelet endothelial cell adhesion molecule-1 (CD31). Tissue sections were incubated with anti-CD31 primary antibody (1:50 dilution; rabbit polyclonal anti-CD31, Abcam) overnight at 4 °C, washed, and incubated with a fluorescently tagged secondary antibody (1:100 dilution; Donkey Anti-Rabbit IgG NL637 Affinity Purified Ab, R&D Systems) for one hour. The sections were mounted using SlowFade Gold antifade mountant with 4′,6-diamidino-2-phenylindole (DAPI, Invitrogen, Carlsbad, CA, USA) and sealed with coverslips. A whole-slide scanning fluorescence imaging system (Hamamatsu NanoZoomer 2.0-HT) was used to image the slides at 40× magnification, and the images were saved in Hamamatsu NanoZoomer Digital Pathology Image (NDPI) format.

#### *2.5. Image Processing*

A 5× magnification of the fluorescence image was extracted from the NDPI file using ImageJ, rotated to match the PA image orientation, and saved for further analysis in MATLAB (Figure 2). Several 1.25 × 1.85 mm<sup>2</sup> regions of interest (ROIs) were chosen from the immunofluorescence (IF) and PA images for correlation analysis. This corresponds to 679 × 1005 pixels in the IF images and 5 × 50 pixels in the PA images. The background was calculated from a similar-sized ROI outside the tumor area in both the IF and PA images and subtracted from the tumor ROI values. The CD31 intensity from each ROI in the IF images was summed and compared to the total PA signal intensity from the corresponding ROI in the PA image. Linear regression analysis was performed using GraphPad Prism software.
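The ROI analysis above (background subtraction, summing intensities, correlating IF against PA) can be sketched as follows. The actual analysis used MATLAB and GraphPad Prism, so this Python version, with synthetic images and made-up ROI coordinates, is only illustrative.

```python
import numpy as np

def roi_correlation(pa_img, if_img, pa_rois, if_rois, pa_bg, if_bg):
    """Background-subtract matched ROIs and compute Pearson's r between
    summed CD31 (IF) intensity and summed PA intensity, mirroring the
    Section 2.5 analysis. ROIs are (row_slice, col_slice) tuples."""
    pa_vals, if_vals = [], []
    for pa_roi, if_roi in zip(pa_rois, if_rois):
        pa_vals.append(pa_img[pa_roi].sum() - pa_img[pa_bg].sum())
        if_vals.append(if_img[if_roi].sum() - if_img[if_bg].sum())
    return np.corrcoef(pa_vals, if_vals)[0, 1]

# Synthetic demo: an IF image perfectly proportional to the PA image
# should yield r = 1 regardless of the background offset.
rng = np.random.default_rng(1)
pa_img = rng.random((50, 50))
if_img = 3.0 * pa_img
rois = [(slice(i, i + 5), slice(0, 5)) for i in range(0, 30, 5)]
bg = (slice(45, 50), slice(45, 50))
print(round(roi_correlation(pa_img, if_img, rois, rois, bg, bg), 3))  # → 1.0
```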

**Figure 2.** Flowchart describing the image processing methodology to correlate CD31 signal intensity with PA signals. (Scale bar = 2 mm).

#### **3. Results**

#### *3.1. SNR Enhances with Frame Averaging*

The PA image of a representative tumor acquired and processed at different frame rates (0.31–60 Hz) is shown in Figure 3A. To enhance the SNR and improve image quality, subsequent PA frames were averaged, resulting in a lower frame rate. This relationship between the frame rate and the SNR of the PA image is displayed in Figure 3B, where, as expected, a lower frame rate offers better SNR. The low SNR of the images at higher frame rates may be attributed to the low output power of the LEDs; however, owing to the high pulse repetition rate of the LED excitation, more PA frames can be acquired and averaged to increase the SNR (Figure 3C). A frame rate of 10 Hz was chosen for data acquisition (2D and 3D) during the experiment, as it provided the best possible SNR for near real-time combined US/PA imaging. To quantitatively compare the 2D images with the IF images, the acquired PA images were further averaged to a final frame rate of 0.31 Hz, enhancing the SNR by 20 dB; i.e., overall, 12,800 frames were averaged to obtain an SNR of ~40 dB (Figure 3C). The enhanced SNR facilitated accurate comparison with histology images without the confound of background noise.
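The averaging behavior described above follows the usual statistics of incoherent noise: averaging N frames reduces the noise standard deviation by √N, an amplitude-SNR gain of 10·log10(N) dB. The synthetic Python sketch below illustrates the trend; the signal level and noise model are made up, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def snr_after_averaging(n_frames, signal=5.0, n_pix=10_000):
    """Average n_frames synthetic 'PA frames' (constant signal plus
    unit-variance Gaussian noise) and report amplitude SNR in dB.
    Averaging N frames should add about 10*log10(N) dB of SNR."""
    frames = signal + rng.normal(0.0, 1.0, (n_frames, n_pix))
    avg = frames.mean(axis=0)
    return 20.0 * np.log10(np.abs(avg.mean()) / avg.std())

for n in (1, 16, 256):
    print(n, round(snr_after_averaging(n), 1))
```

Each 16-fold increase in averaged frames adds about 12 dB (10·log10(16)) in this model.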

**Figure 3.** The signal-to-noise ratio (SNR) is enhanced by averaging several frames in the LED-based PA imaging system. (**A**) PA images of a tumor acquired and processed at different frame rates. (**B**) Relationship between the frame rate and SNR. (**C**) Plot detailing the SNR change with respect to the averaging of frames. (Scale bar = 2 mm).

#### *3.2. Tumor Vascular Imaging with AcousticX*

Figure 4A shows the overlay of the US image (in grayscale) with the PA image of two representative tumors (reconstructed offline). The B-mode US image depicts the shape and structure of the tumor (~6 to 10 mm in diameter and depth), and the PA image provides the locations of blood vessels inside and around the tumor. The PA image is displayed with a pseudo-color hot colormap, where low and high signals are represented by black and white, respectively. Subcutaneous tumors are fed by large blood vessels, usually originating from the skin tissue above or the muscle tissue beneath the tumor, as visualized in Figure 4A. AcousticX can detect the PA signal not only from the surface of the tumor but also from its deep edge (~10 mm). Hematoxylin and Eosin (H&E) is a standard histology stain used to showcase tissue microanatomy. Figure 4B shows the H&E image of the tumor at the same cross-section as the US/PA image, where the tumor structure is analogous in shape and size to that observed in the US image. Figure 4C shows the IF image of the tumor cross-section with CD31 staining for blood vessels (yellow) and DAPI for cell nuclei (blue). DAPI staining was performed to visualize the tumor structure and aided in selecting ROIs corresponding to the US/PA image. Qualitatively, we noticed that regions of high PA signal have high vessel density in the IF images, as indicated by the yellow arrows in Figure 4. The blood vessels on top of the tumor (particularly in the top panel of Figure 4A) are not strongly visualized in the IF image, as these vessels were part of the skin above the tumor, which was removed during histological processing of the tissue.

**Figure 4.** Evaluation of AcousticX on subcutaneous tumor xenografts in mice. (**A**) Ultrasound and photoacoustic image overlay of the tumor vasculature of two mice. Images of the Mouse 1 tumor acquired at a frame rate of 10 Hz are shown in Video 1. Motion due to breathing was minimal and only frames that did not have motion artifacts were used for image analysis. (**B**) H&E stain of the tumor cross-section. (**C**) Immunofluorescence stain for blood vessels in yellow and cell nuclei in blue. (Scale bar on all images = 2 mm, Video S1, MP4, 1.7 MB).

#### *3.3. Correlation of PA Signal with Histology*

The correlation between the PA signal from blood vessels and the CD31 intensity from the IF images is displayed in Figure 5. As expected, we observed that larger blood vessels or areas with high blood vessel density had a high PA signal, while microvessels or regions with low blood vessel density had a low signal. The overall R-value for the correlation was 0.84, and the individual R-values for the three tumors used in the study were 0.65, 0.91, and 0.91. The differences in R-values among the tumors are due to inter-tumor heterogeneity in the vasculature and biological variation across animals. The results obtained here with the LED-based PA imaging system are on par with reports using traditional laser-based PA imaging systems. For example, Mallidi et al. [28] and Bar-Zion et al. [29] qualitatively demonstrated good spatial co-registration between tumor vascular oxygenation maps obtained from PA imaging and hypoxia markers such as pimonidazole or carbonic anhydrase IX. Gerling et al. [30] observed an excellent R-value of 0.95 when quantitatively correlating oxygen saturation measurements from PA imaging with pimonidazole intensity in the tumor; however, that correlation comprised only six points. Furthermore, their PA signals were averaged over the entire tumor, underrepresenting the intra-tumor heterogeneity. In our study, we divided the tumor region into several ROIs to consider both high and low vascular regions. In addition, these studies [28–30] utilized higher-frequency transducers with limited imaging depth. Naturally, some microvascular structures inside the tumors may not be within the detection limits of the 7 MHz transducer used in our study, which would have resulted in a slightly lower correlation between PA signals and CD31 intensity in certain regions of the tumor.
Nevertheless, we clearly demonstrate a good correlation between histology and PA images generated using an LED source with comparatively less optical output energy.

**Figure 5.** Correlation between background subtracted signal intensity from PA images and CD31 immunofluorescence (IF) image. Each color represents data from different mice. A total of 30 ROIs from three mice were used in our analysis. The R-value for correlation based on Pearson's coefficient was calculated between the CD31 intensity from the IF image and PA signals from blood vessels.

#### *3.4. 3D Tumor Imaging with AcousticX*

An LED-based photoacoustic imaging system can also be used to visualize vasculature in 3D. Figure 6 shows the overlay of US/PA images of a tumor acquired by AcousticX at a frame rate of 10 Hz and reconstructed in 3D using ImageJ. A corresponding video showcasing the 3D rendering of the US/PA images of the tumor is shown in Video S2. Figure 6C shows the structure of the tumor in different cross-sectional planes. The measurements obtained from the AcousticX images are in good agreement with the caliper measurements of tumor size (length, width, and depth). The two LED arrays on either side of the US transducer delivered a total optical energy of 400 μJ/pulse at the skin surface. In the absence of optical scattering, the irradiation area at the US focus is approximately 50 × 7 mm in the YX plane, giving a maximum radiant exposure of 0.11 mJ/cm<sup>2</sup>, which is orders of magnitude lower than the ANSI safety limit and the radiant exposures used in laser-based PA imaging systems. It is encouraging that the system used in this study could visualize tumor vasculature in vivo at an imaging depth of 10 mm, considering the low pulse energies provided by the LED arrays.
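The radiant exposure figure above can be verified with a quick back-of-envelope calculation. The ANSI limit used for comparison below is the commonly quoted ~40 mJ/cm² single-pulse skin maximum permissible exposure near 850 nm; it is an assumption for illustration, not a number from the paper, and repetitive-pulse limits are stricter.

```python
# Back-of-envelope check of the radiant exposure quoted in the text,
# assuming the full 400 uJ/pulse is spread uniformly over a 50 x 7 mm
# area (no optical scattering), as stated.
pulse_energy_mJ = 0.4        # 400 uJ total from both LED arrays
area_cm2 = 5.0 * 0.7         # 50 mm x 7 mm = 3.5 cm^2
exposure = pulse_energy_mJ / area_cm2
print(f"{exposure:.3f} mJ/cm^2")  # → 0.114 mJ/cm^2, consistent with 0.11

# Assumed ANSI Z136.1 single-pulse skin MPE near 850 nm (illustrative).
ansi_mpe_mJ_cm2 = 40.0
print(f"{ansi_mpe_mJ_cm2 / exposure:.0f}x below the assumed skin MPE")
```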

**Figure 6.** (**A**) Overlay of ultrasound and photoacoustic images of tumors reconstructed in 3D (Video 2) and (**B**) in orthoslice view. (**C**) Tumor images displayed in different 2D planes. (Scale bar = 2 mm, Video S2, MOV, 779 kB).

#### **4. Discussion**

In preclinical cancer research, rodent models are widely used to study the complex physiological processes involved in tumor formation and therapy response. Tumors are grown either subcutaneously or orthotopically at the site of origin of the tumor cells. Subcutaneous tumors are typically limited to less than 1–2 cm in diameter, and orthotopic tumors, depending on the organ, vary between 3 and 6 mm in diameter. The LED array in AcousticX is 8.65 cm long [14], significantly larger than the tumors generally seen in preclinical research; hence, most of the light from the LED arrays is delivered outside the tumor. Considering this, the imaging depth of ~10 mm we achieved demonstrates the high sensitivity of the system. Better light delivery strategies that illuminate tumors more efficiently may further improve the imaging depth and SNR.

Currently, the AcousticX system is integrated with a 7 MHz ultrasound transducer, which limits the spatial resolution. Higher-frequency transducers offer better resolution, and our future work involves integrating such transducers with LED arrays to enhance image resolution. In addition to improvements in spatial resolution, image SNR can be enhanced through novel beamforming algorithms. Traditional beamforming algorithms are based on the "delay and sum" and "delay-multiply and sum" methods. Although they are simple to implement in imaging systems, they generate low-quality images with low contrast, especially when the data are noisy. Recent studies have shown that the "double-stage delay-multiply and sum" algorithm improves lateral resolution and offers high contrast for an LED-based PA imaging system [31,32]. Moreover, in this study we used a single-wavelength (850 nm) LED array, which provides deeper light penetration, to image the tumor vasculature. The availability of high-power multi-wavelength LED arrays will enable imaging of blood oxygen saturation levels in the tumor vasculature [33,34]. This information is key for monitoring treatment response and predicting recurrence. Multi-wavelength LED arrays can also enable tumor imaging via receptor-targeted exogenous contrast agents [15,18,25,27], as previously demonstrated by us and others with traditional pulsed laser sources [9,35–41]. With the availability of low-cost multi-wavelength PA imaging systems, our future studies will aim at developing and monitoring various targeted therapies guided by multi-wavelength PA imaging, which can provide vascular information along with information on the localization of targeted contrast agents.
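For readers unfamiliar with the "delay and sum" beamformer discussed above, a minimal photoacoustic version can be sketched as below. This is a didactic Python example with arbitrary geometry and sampling values; it is not the AcousticX reconstruction code.

```python
import numpy as np

def delay_and_sum(rf, elem_x, z_grid, x_grid, c=1540.0, fs=40e6):
    """Minimal PA delay-and-sum beamformer. For each image pixel, sum the RF
    samples across elements at the one-way time-of-flight delay (PA sources
    emit once, so there is no transmit delay). rf is (n_elements, n_samples);
    elem_x holds element x-positions in meters."""
    img = np.zeros((len(z_grid), len(x_grid)))
    n_samp = rf.shape[1]
    for iz, z in enumerate(z_grid):
        for ix, x in enumerate(x_grid):
            d = np.sqrt((elem_x - x) ** 2 + z ** 2)    # pixel-element distance
            idx = np.round(d / c * fs).astype(int)     # delay in samples
            valid = idx < n_samp
            img[iz, ix] = rf[np.where(valid)[0], idx[valid]].sum()
    return img

# Synthetic demo: one point absorber at (x = 0, z = 10 mm) seen by a
# 16-element array; the beamformed image should peak at that pixel.
elem_x = np.linspace(-5e-3, 5e-3, 16)
fs, c = 40e6, 1540.0
rf = np.zeros((16, 1024))
d_src = np.sqrt(elem_x ** 2 + 0.010 ** 2)
rf[np.arange(16), np.round(d_src / c * fs).astype(int)] = 1.0
z_grid = np.linspace(8e-3, 12e-3, 5)
x_grid = np.linspace(-2e-3, 2e-3, 5)
img = delay_and_sum(rf, elem_x, z_grid, x_grid, c=c, fs=fs)
```

All 16 elements add coherently only at the true source pixel, which is exactly the focusing gain that the delay-multiply-and-sum variants sharpen further.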

#### **5. Conclusions**

In summary, we demonstrated the capability of an LED-based PA imaging system for monitoring tumor vasculature in vivo. The PA images obtained correlate well with gold-standard IF images. With the widespread rise of PA imaging technology, improved reconstruction algorithms, high-frequency transducers, better illumination strategies, and multi-wavelength LED arrays, we can expect LED-based PA imaging to play a significant role in both preclinical research and clinical applications.

**Supplementary Materials:** The following are available online at http://www.mdpi.com/1424-8220/20/16/4503/s1. Video S1: Visualization demonstrating that motion artifacts in the tumor region caused by mouse breathing, while acquiring the RF data for Figure 4 in the manuscript, are minimal. Video S2: Visualization demonstrating the 3D rendering of the US/PA images of the tumor acquired by AcousticX.

**Author Contributions:** Conceptualization, M.X., M.K.A.S., and S.M.; methodology, M.X. and S.M.; software, M.X. and M.K.A.S.; validation, M.X. and S.M.; formal analysis, M.X.; investigation, M.X. and S.M.; resources, S.M.; data curation, M.X. and M.K.A.S.; writing—original draft preparation, M.X. and S.M.; writing—review and editing, M.X., M.K.A.S., and S.M.; visualization, M.X. and S.M.; supervision, S.M.; project administration, S.M.; funding acquisition, S.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** The work was supported by funds from the Tufts School of Engineering and NIH R41CA221420.

**Acknowledgments:** The authors would like to thank the Pathology core at the Wellman Center for Photomedicine, MGH for sectioning the tissues and access to the NanoZoomer imaging system. We also would like to thank Annika Schaad at Tufts University for help with the Figure 1 schematic drawing.

**Conflicts of Interest:** M.K.A.S. is employed by Cyberdyne Inc. The authors have no other relevant financial interests or potential conflicts of interest to disclose.

#### **References**


© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## *Article* **Oxygen Saturation Imaging Using LED-Based Photoacoustic System**

**Rianne Bulsink <sup>1,†</sup>, Mithun Kuniyil Ajith Singh <sup>2,†</sup>, Marvin Xavierselvan <sup>3</sup>, Srivalleesha Mallidi <sup>3,4</sup>, Wiendelt Steenbergen <sup>1</sup> and Kalloor Joseph Francis <sup>1,\*</sup>**


**Abstract:** Oxygen saturation imaging has potential in several preclinical and clinical applications. Dual-wavelength LED-array-based photoacoustic oxygen saturation imaging can be an affordable solution in this context. For the translation of this technology, there is a need to improve its accuracy and validate it against ground-truth methods. We propose a fluence-compensated oxygen saturation imaging method that utilizes structural information from the ultrasound image and prior knowledge of the optical properties of the tissue in a Monte-Carlo-based light propagation model for the dual-wavelength LED array configuration. We validate the proposed method against oximeter measurements in tissue-mimicking phantoms and demonstrate in vivo imaging of a small animal and a human subject. We conclude that the proposed oxygen saturation imaging can be used to image tissue at a depth of 6–8 mm in both preclinical and clinical applications.

**Keywords:** oxygen saturation imaging; LED; photoacoustics; ultrasound; fluence compensation; in vivo; hypoxia

#### **1. Introduction**

Blood oxygen saturation (sO2) is the ratio of oxygen-saturated hemoglobin to the total hemoglobin concentration [1]. sO2 is a key physiological marker for the detection, diagnosis, and treatment of several devastating diseases such as cancer [1,2]. Hypoxia, a reduction in sO2, is one of the early biomarkers of tumors and inflammatory diseases [3,4]. A fast-growing tumor is expected to have abnormal vascular growth (angiogenesis) in and around it, resulting in hypoxia [5]. Detecting hypoxia at an early stage has a profound impact on early diagnosis and treatment planning. In addition, the detection of sO2 deviations inside the brain is important in the diagnosis of cerebrovascular diseases [6,7]. Hence, sensitive detection of changes in sO2 is useful for early diagnosis and can help in planning better treatment strategies.

**Citation:** Bulsink, R.; Kuniyil Ajith Singh, M.; Xavierselvan, M.; Mallidi, S.; Steenbergen, W.; Francis, K.J. Oxygen Saturation Imaging Using LED-Based Photoacoustic System. *Sensors* **2021**, *21*, 283. https://doi.org/10.3390/s21010283

Received: 15 December 2020; Accepted: 1 January 2021; Published: 4 January 2021

In both preclinical and clinical settings, it is necessary to measure changes in sO2 with high spatial and temporal resolution without compromising imaging depth. Currently, point sO2 measurement with a commercial pulse oximeter is widely used to obtain an average sO2 value. However, the spatial distribution of sO2 is important in several applications, including tumor analysis. Non-invasive functional Magnetic Resonance Imaging (fMRI) can image sO2 changes with spatial resolutions of 1–6 mm [8,9]. However, fMRI systems are expensive and not suitable for point-of-care, resource-limited settings. Another imaging modality suitable for sO2 imaging is positron emission tomography (PET). PET involves ionizing radiation and offers far lower spatial resolution than the other techniques [10]. Purely optical imaging techniques can detect changes in sO2 at reasonable imaging depths; however, the spatial resolution they offer is not sufficient for most clinical applications [11]. Photoacoustic imaging (PAI) has shown potential in multiple preclinical and clinical applications [12,13]. PAI, or optoacoustic imaging, combines the advantages of ultrasound (US) and optical imaging techniques [12]. In PAI, nanosecond pulsed light excitation of optical absorbers in the tissue generates US signals [12], which can be detected with US transducers and reconstructed into an image proportional to the optical absorption of the tissue. PAI is one of the fastest-growing imaging modalities of recent times, offering high-resolution images with optical contrast in deep tissue. Since the acoustic detection hardware can be shared between US and PAI, it is straightforward to implement a dual-mode US/PAI system offering structural, functional, and molecular contrast from a single measurement [14]. Hemoglobin is an excellent optical absorber with well-defined absorption spectra in the near-infrared (NIR) wavelengths; hence, measurement of sO2 is arguably the most compelling application of PAI [15,16]. By carefully selecting the excitation wavelengths (the NIR range is commonly used because of its tissue penetration and strong hemoglobin absorption), one can measure deep-tissue sO2 with unprecedented spatio-temporal resolution using PAI [17]. Conventionally, PAI utilizes bulky, slow, and expensive class IV lasers for tissue illumination [18], a key factor hindering the clinical translation of this modality [19].
In recent years, advancements in solid-state device technology have led to the development of laser diodes (LDs) and light-emitting diodes (LEDs) suitable for PAI [20–22]. High-power LEDs have shown potential in PAI-based superficial and sub-surface imaging (skin and <10 mm imaging depth) in both preclinical and clinical applications [23–28]. LED-PAI is portable, affordable, and energy-efficient, and for these reasons it may find easier clinical acceptance. These advantages of LED-PAI will also be useful in preclinical settings, where it is of great importance to improve small-animal study outcomes by monitoring disease and treatment progress [29,30]. Further, we have reported an affordable LED-based tomographic system that images an object from multiple angles and demonstrated it in finger-joint and small-animal imaging [30,31]. In this work, we propose the use of US image information for fluence compensation to improve the accuracy of sO2 estimation in dual-wavelength LED-based handheld PAI.

Fluence variations in the tissue hinder quantitative sO2 imaging [16]. It is therefore important to develop fluence-compensation methods that are accurate, fast, and easy to implement in real-time PAI systems. Guo et al. utilized the acoustic spectra of PA signals to compute sO2 in the tissue [32]. Deep-learning-based methods have also been used for sO2 estimation [33,34]. Xia et al. utilized the dynamics of sO2 to correct for the fluence. Tzoumas et al. modeled the fundamental optical absorption spectra (eigenspectra) of the tissue, accounting for the unknown optical fluence, and demonstrated improvements in sO2 imaging [35]. Hussain et al. used acoustically tagged photons (acousto-optics) for fluence compensation in sO2 imaging [36]. All the above-mentioned works focused on improving the quantitative nature of sO2 imaging in laser-based PAI, with either algorithmic or optical methods. As most PAI systems can also perform US imaging, using US image information and prior knowledge of the tissue for fluence compensation is a promising direction for quantitative sO2 imaging in practical applications. US information was previously used for fluence estimation in tissue using Beer's law [37]. Even though sO2 imaging using an LED-based PAI system was previously reported by the authors [29,38] and by Zhu et al. [25], depth-dependent fluence variation was not considered in those works. The assumption of homogeneous light distribution, or simple fitting using Beer's law, is not suitable for handheld LED-PAI probes, in which a gap between the tissue and the US probe surface is created by the LED arrays placed at an angle to maximize light in the imaging plane.
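The core idea of depth-dependent fluence compensation, dividing the reconstructed PA image by an estimated fluence map, can be sketched as follows. The paper builds its fluence map from a Monte-Carlo model of the LED probe plus US-based tissue segmentation; the sketch below substitutes a simple 1-D Beer-Lambert decay with an illustrative μ_eff as a stand-in, so the numbers are not the paper's.

```python
import numpy as np

def fluence_compensate(pa_img, depths_mm, mu_eff_per_mm=0.15):
    """Divide a PA image by an estimated fluence map so that deep absorbers
    are not underestimated. Here the fluence is modeled as a 1-D
    Beer-Lambert decay exp(-mu_eff * z); the paper instead uses a
    Monte-Carlo light model with US-derived tissue structure."""
    fluence = np.exp(-mu_eff_per_mm * np.asarray(depths_mm))[:, None]
    return pa_img / fluence

# Two identical absorbers at 2 mm and 8 mm depth: their raw PA amplitudes
# differ by the fluence ratio, but after compensation they match.
depths = np.linspace(0.0, 10.0, 101)          # 0-10 mm, 0.1 mm steps
pa = np.zeros((101, 3))
pa[20, 1] = 1.0 * np.exp(-0.15 * 2.0)         # absorber at 2 mm
pa[80, 1] = 1.0 * np.exp(-0.15 * 8.0)         # same absorber at 8 mm
corrected = fluence_compensate(pa, depths)
```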

In this work, we utilized the information offered by conventional US imaging to segment the tissue and used Monte-Carlo simulations of the LED probe to estimate the fluence map in the imaging plane. For the first time, we characterized the sO2 imaging capability of a commercially available LED-based PAI system using human blood in vitro and compared the results with oximeter readings. Further, we thoroughly tested and validated our proposed US-assisted fluence compensation method and its efficacy in improving LED-PAI-based sO2 imaging using tissue-mimicking phantoms with human blood at different depths and mice in vivo. Finally, we applied our proposed method in a real-time experiment showing clear differentiation between a vein and a pulsating artery in a human wrist.

#### **2. Materials and Methods**

In this work, we used a US and LED-based PA imaging system, AcousticX (Cyberdyne Inc., Tsukuba, Japan). For oxygen saturation imaging, we used a dual-wavelength approach with an LED array having 750 nm and 850 nm elements. Figure 1e shows the probe with the dual-wavelength LED arrays; two LED units were used in the probe, one on either side of the transducer. The 850 nm array has a pulse energy of 200 μJ and the 750 nm array 100 μJ, with a pulse duration of 70 ns. Each LED array unit consists of two rows of 36-element 850 nm arrays and two rows of 24-element 750 nm arrays arranged alternately. A 128-element, 7 MHz linear US transducer with a bandwidth of 80% was used in the probe (Figure 1e). In this section, we present the proposed fluence compensation method and the details of our experimental studies.

**Figure 1.** Algorithm and experimental setup. (**a**) Fluence compensation algorithm using ultrasound information. Schematic of the imaging plane in the photoacoustic oxygen saturation (PA sO2) characterization experiments (**b**–**d**). (**b**) PA sO2 imaging of tube 1 (T1) having constant sO2 against T2 with varying sO2. (**c**) PA sO2 imaging in soft tissue-mimicking medium with tube T3 imaged at four different depths. (**d**) PA sO2 imaging in medium with two optical properties, water on the top and soft tissue-mimicking medium at the bottom. (**e**) LED-based photoacoustic probe with an ultrasound transducer and LED arrays of two wavelengths (750 and 850 nm). (**f**) In vivo imaging of a mouse with cyclically changing oxygen concentration in the breathing air. (**g**) In vivo imaging of an artery and a vein in the wrist of a human volunteer.

#### *2.1. Oxygen Saturation Imaging Using Linear Unmixing*

An overview of PA oxygen saturation estimation using linear unmixing with fluence compensation is presented in this section for completeness [17,39]. The PA initial pressure resulting from pulsed light excitation of an optical absorber, under the assumption of stress confinement, can be expressed as,

$$p\_0(\mathbf{r}, \lambda\_i) = \Gamma \mu\_a(\mathbf{r}, \lambda\_i) \Phi(\mathbf{r}, \lambda\_i). \tag{1}$$

The Grüneisen parameter Γ is the thermal-to-pressure conversion efficiency, *μ<sub>a</sub>* is the optical absorption coefficient, and Φ is the light fluence (integrated radiance). The optical absorption coefficient can be written as the product of the molar extinction coefficient (ε) and concentration (*c*). Thus the reconstructed PA image can be written as,

$$p(\mathbf{r}, \lambda\_i) = \Phi(\mathbf{r}, \lambda\_i) \left[ \epsilon\_{HbR}(\lambda\_i) c\_{HbR}(\mathbf{r}) + \epsilon\_{HbO\_2}(\lambda\_i) c\_{HbO\_2}(\mathbf{r}) \right] \tag{2}$$

Let the two wavelengths used in this study be *λ*<sub>1</sub> = 750 nm and *λ*<sub>2</sub> = 850 nm. Further, let *b*(*r*, *λ<sub>i</sub>*) = *p*(*r*, *λ<sub>i</sub>*)/Φ(*r*, *λ<sub>i</sub>*) be the fluence-compensated PA images, given by

$$b = \begin{bmatrix} b(r, \lambda\_1) \\ b(r, \lambda\_2) \end{bmatrix}. \tag{3}$$

The molar extinction coefficient of HbR and HbO2 for both wavelengths can be combined into a matrix,

$$A = \begin{bmatrix} \epsilon\_{HbR}(\lambda\_1) & \epsilon\_{HbO\_2}(\lambda\_1) \\ \epsilon\_{HbR}(\lambda\_2) & \epsilon\_{HbO\_2}(\lambda\_2) \end{bmatrix}. \tag{4}$$

The molar concentrations of HbR and HbO2,

$$\mathbf{x} = \begin{bmatrix} c\_{HbR}(\mathbf{r}) \\ c\_{HbO\_2}(\mathbf{r}) \end{bmatrix}, \tag{5}$$

can be retrieved by solving the linear equation *Ax* = *b*. The most common approach is the least-squares solution given by,

$$\mathbf{x} = (A^T A)^{-1} A^T b. \tag{6}$$

The oxygen saturation image can then be obtained from the molar concentrations,

$$\text{sO}\_2(r) = \frac{c\_{HbO\_2}(r)}{c\_{HbO\_2}(r) + c\_{HbR}(r)} \times 100\% \tag{7}$$
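The linear-unmixing pipeline of Equations (2)–(7) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the extinction coefficients below are approximate literature-style placeholders, and in practice tabulated spectra should be used.

```python
import numpy as np

# Approximate molar extinction coefficients [cm^-1 / (mol/L)] for HbR and
# HbO2 at 750 nm and 850 nm (illustrative placeholders only).
A = np.array([
    [1405.0,  518.0],   # 750 nm: [eps_HbR, eps_HbO2]
    [ 691.0, 1058.0],   # 850 nm: [eps_HbR, eps_HbO2]
])

def unmix_so2(pa_750, pa_850, phi_750, phi_850, eps=1e-12):
    """Fluence-compensate two-wavelength PA images and unmix to sO2 (%)."""
    # b = p / Phi (Eq. 3): fluence-compensated images, stacked per pixel
    b = np.stack([pa_750 / (phi_750 + eps),
                  pa_850 / (phi_850 + eps)], axis=0)
    shape = b.shape[1:]
    b = b.reshape(2, -1)
    # Least-squares solution x = (A^T A)^-1 A^T b (Eq. 6)
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    c_hbr, c_hbo2 = x[0], x[1]
    # sO2 = HbO2 / (HbO2 + HbR) * 100 (Eq. 7)
    so2 = 100.0 * c_hbo2 / (c_hbo2 + c_hbr + eps)
    return so2.reshape(shape)
```

Applying the forward model of Equation (2) to known concentrations and then unmixing recovers the original sO2, which is a quick sanity check for any implementation of this scheme.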

#### *2.2. Fluence Compensation*

Using LED arrays in the PA probe introduces some constraints. First, it is nearly impossible to have the transducer touching the tissue surface due to the inclined position of the LED arrays; a medium such as water or a gel pad is required to fill this gap for acoustic coupling. Second, with multiple LED element arrays in the unit, the light source is several mm wide and cannot be assumed to be a line source with both wavelengths originating from the same location. Further, with the large opening angle and low power of LEDs, it is not possible to assume homogeneous illumination in the imaging plane. In this section, we present a method to estimate the fluence map inside the tissue utilizing the information from the US image. The following steps were performed to obtain a fluence compensated sO2 image (Figure 1a).


1. The US image was segmented to obtain the tissue boundary.
2. The segmented binary mask, together with prior optical properties of the tissue, was used as input to a Monte-Carlo light propagation model of the LED probe.
3. Fluence maps at both wavelengths were computed in the imaging plane.
4. The PA images at both wavelengths were normalized by the corresponding fluence maps.
5. Linear unmixing was used to obtain oxygen saturation images from the fluence normalized PA images.

Characterization of the proposed approach was performed in three sets of experiments. In the first experiment, oxygen saturation imaging with the linear unmixing algorithm and the two-wavelength LED array was validated. Next, the effect of fluence compensation on oxygen saturation imaging was studied by imaging tubes carrying blood at four different depths in a soft tissue-mimicking medium. Finally, the US-assisted fluence compensation was tested on a two-slab phantom and in vivo.

#### 2.2.1. Ultrasound Segmentation

The tissue boundary was obtained by segmenting the US image. First, a median filter was applied to smooth speckle in the US image. Next, a binary image was obtained by thresholding the grayscale US image; the threshold value was manually selected for different tissue types. The binary image was then processed with the Sobel edge detection operator to obtain the tissue boundary. Holes created by speckle were closed using a morphological filling operation. The largest connected component was identified and used as the mask for the target tissue area. All image processing operations were performed using the MATLAB (MathWorks, MA, USA) image processing toolbox. For the liquid phantom used in the experiment, the US information about the separating film was used and the water region was set to zero.
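As a sketch in Python (the authors used MATLAB's image processing toolbox), the segmentation chain described above could look like the following; the filter size and threshold are hypothetical placeholders that would be tuned per tissue type, and boundary extraction with a Sobel operator could be applied to the resulting mask afterwards.

```python
import numpy as np
from scipy import ndimage

def segment_tissue(us_image, threshold):
    """Segment the tissue region from a grayscale US image.

    Mirrors the steps in the text: median filter -> manual threshold ->
    morphological hole filling -> largest connected component.
    """
    # 1. Median filter to suppress speckle
    smoothed = ndimage.median_filter(us_image, size=5)
    # 2. Manual threshold (value chosen per tissue type)
    binary = smoothed > threshold
    # 3. Fill speckle-created holes inside the tissue region
    filled = ndimage.binary_fill_holes(binary)
    # 4. Keep only the largest connected component as the tissue mask
    labels, n = ndimage.label(filled)
    if n == 0:
        return filled
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```

On a synthetic image with a bright tissue block, a speckle hole, and a small spurious blob, the mask keeps the block, fills the hole, and discards the blob.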

#### 2.2.2. Light Propagation Model

A binary mask from the US segmentation and prior optical properties of the tissue were used as inputs to the model. We used a Monte Carlo based light propagation model (MCXLAB) for the simulations [41]. The LED-based PA probe consists of a US transducer and LED units with two-wavelength arrays. The positions of the LED elements in the array, their opening angle, and pulse energy were used to define the light source. Measured optical properties of the phantom, and literature values for the in vivo experiments, at the two wavelengths were used as medium properties. The imaging plane was defined as the center slice in a three-dimensional grid, and the fluence map was retrieved from the same location after the simulation.

A three-dimensional geometry of 55 mm (x-axis) × 55 mm (y-axis) × 37.9 mm (z-axis) (744 × 744 × 512 pixels) with a uniform grid size of 74 μm was used for the simulation. The acquired images (PA and US) had dimensions of 40.32 mm (x) × 37.9 mm (z), resulting from a transducer with 128 elements at a pitch of 0.315 mm and 1024 time samples at a 40 MHz sampling rate. The PA and US images were then interpolated to 545 × 512 pixel images with a uniform spacing of 74 μm in both dimensions. The input binary mask to the model was of the same size (545 × 512). The LED units have a length of 50 mm and a width of 10 mm. They were placed at an angle of 41.4° with respect to the transducer surface and 0.15 mm in front of it, on either side of the transducer with a distance of 9.56 mm between them. Four LED element arrays, two for each wavelength (750 nm and 850 nm), are present in each LED unit. The 750 nm arrays have 24 elements and the 850 nm arrays have 36 elements. The elements were placed with a spacing of 1.4 mm within an array, and the spacing between arrays was 1.72 mm. An opening angle of 120° for each element was used in the model. The two wavelengths have a power ratio of 2:1 for 850 nm and 750 nm, respectively; this power ratio was incorporated in defining the light source.
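The grid and image dimensions quoted above are mutually consistent; a small sketch makes the bookkeeping explicit. The 1480 m/s speed of sound is an assumption chosen to reproduce the quoted 37.9 mm axial extent, not a value stated in the text.

```python
# Consistency check of the quoted simulation-grid and image dimensions
# (bookkeeping only, not simulation code).
GRID_SPACING_MM = 0.074          # 74 um uniform voxel size
GRID_PX = (744, 744, 512)        # (x, y, z) voxels

def image_extent_mm(n_elements=128, pitch_mm=0.315,
                    n_samples=1024, fs_hz=40e6, c_mm_s=1.48e6):
    """Lateral and axial extent of the acquired PA/US image.

    Axial extent assumes one-way (PA) travel at an assumed speed of
    sound of 1480 m/s; the paper quotes 40.32 mm x 37.9 mm.
    """
    lateral = n_elements * pitch_mm          # 128 * 0.315 = 40.32 mm
    axial = n_samples / fs_hz * c_mm_s       # 1024 / 40 MHz * 1480 m/s
    return lateral, axial
```

Dividing the image extents by the 74 μm grid spacing recovers the 545 × 512 interpolated image size given in the text.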

The length of the input US mask (545 pixels) was smaller than the length of the simulation grid (744 pixels) along the x-axis. Hence, the first and last pixels were replicated to fill the entire length of the grid, with the US mask at the center. Further, the mask is two-dimensional; to define the medium properties in the third dimension, 744 copies of the same mask were stacked along the y-axis. Two phantom experiments and two in vivo experiments were performed in this study (Figure 1); more details are provided in Section 2.3. The optical properties used in the simulation are provided in Table 1 below. The tubes used in the experiments were assumed to be transparent at the wavelengths used and hence were not considered in the light propagation model.
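The mask replication and stacking described above can be sketched with NumPy; the array axis ordering (z, y, x) here is an illustrative convention, not necessarily the one used in the authors' MCXLAB setup.

```python
import numpy as np

def mask_to_volume(mask2d, grid_x=744, grid_y=744):
    """Extend a 2D US mask (z, x) to the 3D simulation grid.

    Edge pixels are replicated along x to fill the grid width with the
    mask centered, and the padded slice is copied along y, matching the
    procedure described in the text.
    """
    n_z, n_x = mask2d.shape
    pad_total = grid_x - n_x
    pad_left = pad_total // 2               # keep the US mask centered
    pad_right = pad_total - pad_left
    padded = np.pad(mask2d, ((0, 0), (pad_left, pad_right)), mode='edge')
    # Stack identical copies along y to define the 3D medium
    return np.repeat(padded[:, np.newaxis, :], grid_y, axis=1)

vol = mask_to_volume(np.ones((512, 545), dtype=np.uint8))
# vol.shape == (512, 744, 744)
```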

**Table 1.** Optical properties used in the simulations: optical absorption coefficient (*μ<sub>a</sub>*) and reduced scattering coefficient (*μ′<sub>s</sub>*) (anisotropy, g = 0.9).


#### *2.3. In Vitro Characterization in Phantoms*

#### 2.3.1. In Vitro Validation of Oxygen Saturation Imaging

The goal of this experiment was to check the accuracy of two-wavelength PA sO2 imaging against the ground truth, oximeter sO2. Fluence compensation was not considered in this case; however, the pulse energies of both wavelengths were used to normalize the PA images. Figure 1b shows the imaging plane, where two tubes were placed in a water tank at 15 mm from the transducer surface. Both tubes were placed at the same depth with a small distance between them so that fluence variations were minimal. Polythene tubes (Smiths Medical, USA) with an inner diameter of 0.5 mm and an outer diameter of 0.84 mm, filled with human blood, were used for imaging. In the first tube (T1), blood with a constant oxygen saturation of around 65% was maintained. In the second tube (T2), different levels of blood oxygenation were introduced; in this way, we also aimed to observe the relative difference in PA-estimated sO2. The blood (9 mL with heparin as an anticoagulant) was obtained from a human volunteer by TechMed Centre donor services, University of Twente, after ethical approval, adhering to the Dutch Medical Research Involving Human Subjects Act (WMO). One part (4 mL) of the blood was used for tube T1. To obtain deoxygenated blood, 17 mg of sodium hydrosulfite (Sigma Aldrich, Darmstadt, Germany) was added to 5 mL of blood. The deoxygenated blood was then exposed to atmospheric oxygen to obtain different levels of oxygenation. Blood oxygen saturation was measured in both tubes before and after imaging using an oximeter (AVOXimeter 4000, ITC, Munich, Germany). PA images at 750 nm and 850 nm were acquired by toggling between the two-wavelength arrays, and PA sO2 images were obtained using the linear unmixing method described above. The oxygen saturation in each tube was estimated by averaging 20 PA frames, with each frame generated by averaging 64 frames on-board in the system. A region of interest of 0.6 × 0.6 mm was selected for each tube and the mean PA sO2 value was used for the analysis. The PA-estimated sO2 values were then compared against the oximeter-measured sO2 values.

#### 2.3.2. Imaging a Homogeneous Phantom

In the second experiment, the effect of fluence compensation on sO2 imaging at different depths was studied. Figure 1c shows the imaging plane, where the same tube (T3) carrying blood was arranged such that it crosses the imaging plane at four different depths. An absorbing and scattering medium with soft tissue optical properties was used [42]; the optical properties of the medium are given in Table 1. Intralipid and India ink were used to prepare the phantom: in a five liter solution, 275 mL of 20% Intralipid (Fresenius Kabi, Bad Homburg, Germany) was used and 71.8 μL of India ink (Talens, The Netherlands) was added from a stock solution (dilution factor 69,642). Using the light propagation model, fluence maps for both wavelengths were computed and used to compensate for the non-uniform illumination in the PA images. The fluence-compensated PA images were used to compute the PA sO2. PA sO2 images with and without fluence compensation were compared with the blood oxygen saturation measured with the oximeter.

#### 2.3.3. Imaging a Two Slab Phantom

In a realistic scenario, the transducer cannot touch the tissue surface because of the LED arrays placed at an angle on both sides of it; in most cases, water or a gel pad is used as a coupling medium. Oxygen saturation imaging in such a scenario was studied in this experiment with a two-slab phantom: water was used for the top slab and a soft tissue-mimicking medium for the bottom slab (Figure 1d). The two media were separated by a polyurethane US film (Protection Cover Ultrasound B. V., The Netherlands) with a thickness of 140 μm. The optical properties of the soft tissue phantom are given in Table 1. The tube arrangement from the previous experiment, with four different depths, was used in this experiment as well; with the same blood in the tube, constant oxygen saturation was expected at all four depths. B-mode US imaging was performed and used to identify the two media. Light propagation was modeled in the two-slab medium, and the fluence maps were used to correct the PA images before estimating the oxygen saturation.

#### *2.4. In Vivo Imaging*

Two in vivo imaging experiments were performed. In the first experiment, fluence compensated oxygen saturation imaging was performed on a mouse. In the second experiment, in vivo oxygen saturation imaging was performed in the wrist of a healthy volunteer.

#### 2.4.1. Small Animal Imaging

To track changes in blood oxygen saturation in vivo, we conducted a small animal study. The animal (a BALB/c nude mouse) was anesthetized with isoflurane vapor added to the breathing air through a nose cone (Figure 1f). The transducer as well as the lower part of the mouse's body were immersed in a warm water bath for acoustic coupling (Figure 1f). The flank region close to the thigh was aligned to the imaging plane using US imaging. A combination LED array (750 nm and 850 nm) was used at a repetition rate of 4 kHz, and the resulting PA signal was averaged 640 times to display PA images at a frame rate of 6 Hz. At first, the animal breathed normal air (21% oxygen) and oxygen saturation images were acquired. Then the breathing gas was changed to 100% oxygen and the animal was allowed two minutes to stabilize at the new oxygen level. US and PA raw RF data were saved for the entire duration. The US image was used to identify the tissue boundary, and the optical properties of mouse tissue used in the light propagation model are given in Table 1. Fluence-compensated oxygen saturation images were computed, and the sO2 values were compared between the two breathing conditions by selecting three blood vessels as regions of interest.

#### 2.4.2. Human Imaging

In the final experiment, the wrist of a human volunteer with brown skin was imaged. An artery and a vein were identified and imaged in real time (30 Hz) using both B-mode US and two-wavelength PA imaging. US-assisted fluence compensation was performed and oxygen saturation images were computed. From the PA sO2 image, mean sO2 values for the artery and vein were estimated by averaging over a region of interest in each.

#### **3. Results**

#### *3.1. In Vitro Validation of PA sO*<sup>2</sup>

Validation of PA sO2 imaging is presented in this section to analyze the accuracy of two-wavelength PA sO2 against a ground truth method. Figure 2a shows the two tubes with blood at three different oxygenation levels. Oximeter readings are marked next to the tubes, and the difference in oxygen saturation is visible in the PA sO2 images. To quantify the changes in oxygen saturation and validate them against oximeter readings, PA sO2 values in a region of interest of each tube are compared in Figure 2b. The graph shows the PA-estimated sO2 against the measured oxygen saturation; sO2 readings before and after imaging are indicated by the error bars on the measured sO2. A linear fit to the data is shown with the dashed line. The correlation coefficient (r) between measured sO2 and PA-estimated sO2 is 0.893 (*p* < 0.001), showing that the two measurements are linearly correlated. The error in the PA-estimated sO2, given by the difference between the two methods, is plotted in Figure 2c. A mean error of 0.18% and a standard deviation of 10.33% were observed. Most of the error values lie around the mean, and 94% of the values are within the mean ± 2SD region, indicating that the variations stem from noise in the PA measurements.
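The agreement analysis reported here (mean error, standard deviation, and fraction of differences within ±2SD) is a Bland-Altman-style comparison. A sketch of the computation, using hypothetical paired readings rather than the study's data:

```python
import numpy as np

def agreement_stats(so2_pa, so2_oximeter):
    """Bland-Altman-style agreement between PA and oximeter sO2 (%).

    Returns mean difference, SD of differences, fraction of points
    within mean +/- 2SD, and the Pearson correlation coefficient.
    """
    so2_pa = np.asarray(so2_pa, dtype=float)
    so2_oximeter = np.asarray(so2_oximeter, dtype=float)
    diff = so2_pa - so2_oximeter
    mean_err = diff.mean()
    sd = diff.std(ddof=1)
    within_2sd = np.mean(np.abs(diff - mean_err) <= 2 * sd)
    r = np.corrcoef(so2_pa, so2_oximeter)[0, 1]
    return mean_err, sd, within_2sd, r
```

With perfectly matched readings the mean error is zero and all points fall within the limits; with noisy readings the correlation and ±2SD fraction degrade as expected.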

**Figure 2.** Photoacoustic Oxygen saturation (PA sO2) validation. (**a**) PA sO2 images from three blood oxygen levels with oximeter reading marked in the image. (**b**) PA sO2 plotted against measured sO2 from oximeter readings. (**c**) Error plot showing average sO2 from oximeter values and PA sO2 plotted against difference in sO2 between the two methods.

#### *3.2. Fluence Compensated PA sO*<sup>2</sup>

In the second experiment, a homogeneous phantom with soft tissue optical properties was considered (Table 1). The fluence for both 750 and 850 nm is shown in Figure 3, with the peak fluence observed at the LED locations in Figure 3a,b. Due to the advanced position (9.2 mm) of the LED arrays on the sides of the transducer, this region of the imaging plane cannot be used for imaging tissue; we therefore only considered the imaging plane below the LED position as the region of interest (ROI). Figure 3c shows a normalized line profile through the fluence map for the entire imaging plane and in the ROI. Figure 3d,h show the fluence maps at 750 and 850 nm, respectively. Figure 3e,i show the PA images of the same tube with blood at different depths at 750 and 850 nm, respectively, and Figure 3f,j show the corresponding fluence-compensated PA images. The PA images show the signal from the tube at three depths; with fluence compensation, an enhancement of the PA signal from deeper locations is visible. Figure 3g shows the PA sO2 image before fluence compensation and Figure 3k after fluence compensation.

A fluence drop to 1/*e* was observed at a depth of 3.6 mm for 750 nm and 4.2 mm for 850 nm. The maximum imaging depth obtained in the system with the denoising filters was 10.5 mm, and with the offline program it was 6 mm. The measured oximeter readings of the blood in the tube were 95.9% before imaging and 96.6% after imaging. The PA-estimated sO2 (mean values) at three tube locations were 85.8%, 83.9% and 81.7% before fluence compensation and 99.1%, 99.3% and 99.6% after fluence compensation. In the offline reconstruction, the fourth tube location was not visible; however, all four locations were visible in the system.
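In diffusive tissue, the fluence decays roughly as Φ(z) ≈ Φ₀·exp(−μ_eff·z) with μ_eff = √(3·μa·(μa + μs′)), so the 1/e depths quoted here correspond to effective attenuation coefficients of about 1/3.6 mm⁻¹ and 1/4.2 mm⁻¹. A sketch under the diffusion approximation; the optical properties used below are illustrative values (chosen to reproduce the quoted depths with μs′ = 1 mm⁻¹), not the Table 1 values.

```python
import math

def mu_eff(mu_a, mu_s_prime):
    """Effective attenuation coefficient (diffusion approximation), mm^-1."""
    return math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))

def one_over_e_depth(mu_a, mu_s_prime):
    """Depth (mm) at which diffuse fluence falls to 1/e of its surface value."""
    return 1.0 / mu_eff(mu_a, mu_s_prime)
```

For example, with μs′ = 1 mm⁻¹, an absorption of roughly 0.025 mm⁻¹ yields a 1/e depth near 3.6 mm, and roughly 0.019 mm⁻¹ yields one near 4.2 mm, consistent with lower absorption at 850 nm.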

**Figure 3.** Fluence compensation in a homogeneous medium. Fluence map at the imaging plane from (**a**) 750 nm and (**b**) 850 nm. (**c**) Line profile showing normalized fluence in the depth direction and the zoomed-in region showing fluence decay in the Region Of Interest (ROI), marked with dashed lines. Fluence map in the ROI with (**d**) 750 nm and (**h**) 850 nm. Uncompensated photoacoustic (PA) images at (**e**) 750 nm and (**i**) 850 nm and corresponding fluence compensated PA images (**f**,**j**). Photoacoustic oxygen saturation images (PA sO2) (**g**) before fluence compensation and (**k**) after fluence compensation.

In the third experiment, a two-slab phantom was considered, with water in the upper region and a soft tissue-mimicking phantom in the lower region. Figure 4a is the US image showing the film separating the two media. Figure 4b is the segmented image, with ones for the soft tissue region and zeros for water. Figure 4c shows the fluence in the medium at 850 nm. The PA image of the tube at three different depths is shown in Figure 4d, and the fluence-compensated PA image in Figure 4e shows the enhancement at deeper locations. Figure 4f shows the sO2 image before fluence compensation overlaid on the US image, and Figure 4g the sO2 image after fluence compensation; the enhancement in PA sO2 is visible in Figure 4g. The PA sO2 estimated by taking the mean value at the tube locations was 82.4%, 76.9% and 77.7% before fluence compensation and 91.8%, 89.5% and 92.1% after fluence compensation. The oximeter readings before and after imaging were 96.8% and 97.4%, respectively.

#### *3.3. In Vivo PA sO*<sup>2</sup> *Imaging*

Results from in vivo PA sO2 imaging of mouse thigh muscle are shown in Figure 5. The animal was given two levels of oxygen, switching between normal air and 100% oxygen. Figure 5a shows the US image of the thigh muscle of the mouse. The fluence map at 850 nm, computed using the US-segmented image, is shown in Figure 5d. Figure 5b,e show fluence-compensated PA sO2 images corresponding to normal air (low cycle) and 100% oxygen (high cycle), respectively. Zoomed-in regions from the low and high cycles are shown in Figure 5c,f, respectively. The difference in the sO2 images between normal air and 100% oxygen is clear. To quantify the change, three regions marked 1, 2, and 3 in Figure 5c,f were analyzed for mean PA sO2 values. During the low cycle, PA sO2 values were 72.9%, 67.8% and 67.3% for regions 1, 2, and 3, respectively, and they changed to 93.4%, 89.2% and 85.4% during the high cycle.

**Figure 4.** Ultrasound-assisted fluence compensation. (**a**) Ultrasound image of the two-layer phantom. (**b**) Binary mask obtained by segmenting ultrasound image. (**c**) Fluence map at 850 nm. (**d**) Photoacoustic (PA) images before fluence compensation at 850 nm and corresponding (**e**) fluence compensated image. Photoacoustic oxygen saturation (PA sO2) images (**f**) before fluence compensation and (**g**) after fluence compensation overlaid on the ultrasound image.

**Figure 5.** In vivo small animal imaging. (**a**) Ultrasound image of the thigh muscle of a mouse. (**b**) Fluence compensated photoacoustic oxygen saturation (PA sO2) during the low sO2 cycle overlaid on the ultrasound image. (**c**) Zoomed-in region of the PA sO2 image from the low sO2 cycle. (**d**) Fluence map at 850 nm obtained using segmented ultrasound image. (**e**) Fluence compensated PA sO2 during the high sO2 cycle, overlaid on the ultrasound image. (**f**) Zoomed-in region of the PA sO2 image from the high sO2 cycle.

In the final experiment, in vivo imaging of the wrist of a healthy volunteer was performed. Figure 6a shows the fluence-compensated PA sO2 image overlaid on the US image. An artery on the right side and a vein on the left side of the PA sO2 image are visible, along with the skin signal. The artery is characterized by high sO2, while the vein shows low sO2. The estimated PA sO2 in a region of interest in the artery was 92.6% and in the vein 78.7%; the difference in sO2 between artery and vein is evident in Figure 6a. The mean PA signal acquired in real time (30 Hz) from the artery at 850 nm is presented in Figure 6b, showing pulsation due to the heartbeat. The estimated heart rate of the subject from the PA signal was 84 beats per minute. Figure 6c shows the real-time (without fluence compensation) PA sO2 frames from the system, with the pulsating artery demonstrating the real-time capability of LED-based PA sO2 imaging.
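A heart-rate estimate of this kind can be obtained from the 30 Hz PA amplitude trace with simple peak detection; the sketch below uses a clean synthetic 1.4 Hz pulsation (84 bpm), whereas a real trace would typically need denoising first, and the 0.4 s minimum peak spacing is an assumed physiological bound.

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(signal, fs_hz):
    """Estimate heart rate from a pulsatile PA amplitude trace."""
    # Require peaks at least 0.4 s apart (assumes heart rate < 150 bpm)
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs_hz))
    duration_s = len(signal) / fs_hz
    return 60.0 * len(peaks) / duration_s

# Synthetic 1.4 Hz pulsation sampled at 30 Hz for 10 s
fs = 30.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.4 * t)
```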

**Figure 6.** In vivo imaging on a human wrist. (**a**) Fluence compensated photoacoustic oxygen saturation (PA sO2) image of the human wrist overlaid on an ultrasound image. (**b**) Normalized photoacoustic (PA) signal from artery showing pulsation. (**c**) PA sO2 frames showing pulsating artery. (Video S1).

#### **4. Discussion**

In recent years, LED-PAI has shown potential in various clinical and preclinical applications; however, depth-dependent fluence variation has not been considered in PA sO2 calculation. Accurate fluence-compensated PA sO2 is expected to accelerate the clinical translation of LED-PAI. In this work, we characterized the accuracy of PA sO2 imaging using human blood in vitro and compared the PA sO2 with conventional oximeter readings; a good correlation of 0.893 (*p* < 0.001) was achieved between measured sO2 and PA sO2. We utilized the structural information offered by US images for tissue segmentation. The segmented tissue information was then used in a Monte-Carlo light propagation model to compensate for fluence variations and improve sO2 imaging accuracy. The fluence compensation was validated using two different tissue-mimicking phantoms with human blood-filled tubes at different depths, and the sO2 values were compared before and after the fluence compensation step. In both phantoms, blood with the same oxygenation level was present at different depths. It is evident from our results (decreasing sO2 values with depth) that accounting for light fluence variations is important for estimating sO2 accurately. After applying the fluence compensation algorithm, the PA sO2 imaging accuracy improved by 12% in the two-slab phantom and 15% in the homogeneous phantom studies, with the best enhancement achieved at deeper locations in comparison to uncompensated PA sO2 imaging. An error of 15% (with no fluence compensation) is not acceptable in a clinical scenario, especially when one has to differentiate an artery and a vein (the difference in sO2 between arterial and venous blood is less than 15–20%). To demonstrate the potential of fluence-compensated PA sO2 imaging in vivo, we performed a small animal study in which the thigh muscle of a live mouse was imaged. First, the mouse breathed normal air (21% oxygen) and was then switched to 100% oxygen while PA sO2 was monitored to assess the dynamic imaging capability. Our approach yielded the expected results, showing differences in sO2 values of different blood vessels (67–73% sO2 during normal air breathing and around 85–94% during 100% oxygen breathing) at an imaging depth of approximately 6 mm. Finally, we performed a fluence-compensated PA sO2 measurement on the wrist of a human volunteer and imaged a vein and a pulsating artery, for which a difference of 14% in sO2 was observed.

We foresee that implementing this idea in the imaging system will improve its sO2 imaging capability. Since US imaging is already available in the system, these data can be used for tissue segmentation to compute a fluence map specific to the tissue and compensate the PA data for sO2 estimation. We believe that our approach is suitable for any PAI system with US imaging capability. In this work, we achieved a penetration depth of around 10 mm in phantom experiments when sO2 imaging was performed along with conventional US imaging in the AcousticX system. For the in vivo experiments, we achieved an imaging depth of 6 mm, which is sufficient for the small animal and human wrist imaging we demonstrated. However, it is critical to improve the imaging depth to explore a wider range of clinical applications. Improving the LED array packaging, pulse repetition rate, and coded-excitation schemes are some aspects to be considered in this direction [45]. Further, novel deep learning and image processing methods are also expected to improve the spatial resolution and imaging depth of LED-PAI [46–48]. In this work, we used a combination LED array of 750 nm and 850 nm with pulse energies of around 200 μJ and 100 μJ per pulse, respectively. The pulse energy at 750 nm is just 100 μJ, half that of 850 nm, and this is the main factor limiting the imaging depth in this work. The higher background noise in the 750 nm images (compared to 850 nm) may well be the reason for the slight inaccuracies (especially in deeper features) in our sO2 estimates [49]. Our future work will include the design and development of new LED arrays with optimal wavelengths and improved optical energy.

Even though we achieved promising results in this proof-of-concept study, there is considerable scope for improving the quantitative nature of PA sO2 imaging. In our proposed fluence compensation model, we segmented the phantom and tissue as two layers (water and a soft tissue layer in the phantom, water and muscle in mouse imaging, and water and soft tissue in human wrist imaging), which is not an optimal approach. In future work, we will consider using US images to segment different tissue types [50] and then perform a more accurate fluence compensation for quantitative sO2 imaging. In the Monte-Carlo light propagation model, the reflection of light at different tissue layers was not considered; this will be addressed in future work. Another factor to consider in future studies is the band-limited frequency response of the detectors and its impact on the accuracy of sO2 imaging. PA imaging is limited in predicting absolute sO2 values, as it depends on multiple factors and is difficult to calibrate in live tissue; however, relative sO2 imaging is still useful in clinical applications. We foresee several applications, including monitoring hypoxia in the tumor microenvironment.

In summary, for the first time, we characterized and validated fluence-compensated LED-PAI-based sO2 imaging using in vitro, phantom, and in vivo small animal and human volunteer studies. Our results show the importance of accounting for fluence variations in tissue when performing sO2 imaging with an LED-PAI system. Further, our results demonstrate the potential of LED-PAI for obtaining sO2 information dynamically along with conventional pulse-echo imaging. We believe that combined US and sO2 information at such high temporal and spatial resolution may be useful for diagnosis and treatment monitoring of several inflammatory and cancer-related diseases.

#### **5. Conclusions**

Our results demonstrate that oxygen saturation imaging using a dual-wavelength LED-based photoacoustic system has potential in preclinical and clinical applications. The proposed fluence compensation method, which utilizes structural information about the tissue from US images, was found to improve oxygen saturation imaging accuracy by 12–15%. Within the limited power of the LEDs, an imaging depth of 6–8 mm was achieved in soft tissue. We demonstrated imaging of different levels of blood oxygenation in vivo in both a small animal and a healthy human subject and found the values to be within the expected range, supporting the accuracy of the proposed approach. Further research on improving the pulse energy of LED sources to enhance imaging depth, and on accurate segmentation of different tissue types for fluence compensation, can potentially translate this technology to preclinical and clinical applications.

**Supplementary Materials:** The following are available online at https://www.mdpi.com/1424-8220/21/1/283/s1, Video S1: Real-time ultrasound and photoacoustic oxygen saturation imaging of a human wrist, showing a pulsating artery, vein, and skin layer.

**Author Contributions:** Conceptualization: M.K.A.S. and K.J.F.; Formal analysis: K.J.F.; Funding acquisition: W.S. and M.K.A.S.; Investigation: K.J.F.; Methodology: R.B., M.K.A.S., M.X. and K.J.F.; Project administration: W.S.; Supervision: W.S., S.M. and K.J.F.; Writing—original draft: R.B., M.K.A.S. and K.J.F.; Review and editing: M.K.A.S., W.S., R.B., M.X., S.M. and K.J.F. All authors have read and agreed to the published version of the manuscript.

**Funding:** We would like to acknowledge funding from the National Centre for the Replacement, Refinement and Reduction of Animals in Research (CRACKITRT-P1-3) and the 4TU Precision Medicine program.

**Institutional Review Board Statement:** All animal experiments were performed in compliance with the Institutional Animal Care and Use Committee (IACUC) of Tufts University.

**Informed Consent Statement:** Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement:** Data available on request from the authors.

**Acknowledgments:** We would like to thank Maura Dantuma for inputs on the light propagation model.

**Conflicts of Interest:** M.K.A.S. is employed by Cyberdyne Inc. The authors have no other financial interests or conflicts of interest to disclose.

#### **References**


## *Letter* **Photoacoustic Imaging as a Tool for Assessing Hair Follicular Organization**

#### **Ali Hariri 1, Colman Moore <sup>1</sup> , Yash Mantri <sup>2</sup> and Jesse V. Jokerst 1,3,4,\***


Received: 21 September 2020; Accepted: 11 October 2020; Published: 16 October 2020

**Abstract:** Follicular unit extraction (FUE) and follicular unit transplantation (FUT) account for 99% of hair transplant procedures. In both cases, it is important for clinicians to characterize follicle density for treatment planning and evaluation. The existing gold standard is photographic examination. However, this approach is insensitive to subdermal hair and cannot identify follicle orientation. Here, we introduce a fast and non-invasive imaging technique to measure follicle density and angles across regions of varying density. We first showed that hair is a significant source of photoacoustic signal. We then selected regions of low, medium, and high follicle density and showed that photoacoustic imaging can measure the density of follicles even when they are not visible by eye. We performed handheld imaging by sweeping the transducer across the imaging area to generate 3D images via maximum intensity projection. Background signal from the dermis was removed using a skin tracing method. Measurement of follicle density using photoacoustic imaging was highly correlated with photographic determination (R<sup>2</sup> = 0.96). Finally, we measured subdermal follicular angles—a key parameter influencing transection rates in FUE.

**Keywords:** LED-based photoacoustic imaging; hair follicles; FUE; FUT

#### **1. Introduction**

Follicular unit extraction (FUE) and follicular unit transplantation (FUT) are the gold standard surgical interventions for androgenic alopecia and account for >99% of transplant procedures [1,2]. Both of these techniques involve transplantation of healthy hair follicles from a safe donor area to a low-density region of thinning/balding [3,4]. The procedures differ in how the donor follicles are collected. In FUE, follicles are individually extracted using a handheld punching device. In FUT, a strip of scalp is resected and then dissected to obtain individual follicles [5,6]. Consequently, FUE results in less severe scarring than FUT, and is more commonly requested by patients for this reason [7]. However, FUE is a more technically demanding procedure and has a higher risk of follicle transection; in some cases, it can also lead to cyst formation or a "moth-eaten" appearance of the donor region [8].

One important element for both of these procedures is characterization of the follicular units both at the donor and implant site (pre- and post-operatively). Factors such as graft size, angle, and grafting density have a significant effect on follicular unit survival rates [9,10]. Beyond implant survival, these metrics also have implications for potential complications such as the development of subdermal cysts or telogen effluvium ("shock loss") [11,12]. Finally, variations in graft density and angle can have major impacts on cosmetic satisfaction.

Photography and/or visual inspection is the current gold-standard means of characterization during treatment planning and evaluation of results [1]. However, this practice neglects the subdermal orientation/depth of follicles, which is the primary factor governing undesirable transection rates in FUE [13]. Furthermore, these optics-based inspections cannot evaluate the follicle below the skin surface or during the "shock loss" period after transplant [14]. Indeed, subdermal characterization of the follicles at the donor and implant sites could inform the clinician on the best donor sites as well as the risk and status of downstream complications such as telogen effluvium and cysts. Therefore, we propose the use of a non-invasive imaging modality for gleaning both supra- and subdermal information about the density and orientation of follicles.

Ultrasound is a widely deployed imaging modality across medical specialties, with applications in dermatology for melanoma, inflammatory diseases, and lipoablation [15]. These applications are currently being expanded as photoacoustic imaging (PAI), an augmented form of ultrasound, continues to gain clinical traction [16,17]. PAI uses pulsed light to generate ultrasound waves from the imaging target. This shifts the mechanism of imaging contrast from differences in acoustic reflection to optical absorption. When performed simultaneously with ultrasound (routine practice for commercially available systems), the modality is capable of real-time imaging with anatomical and molecular contrast through many centimeters of tissue. PAI has experienced tremendous research growth in the past decade as a diagnostic platform and is currently being investigated for a number of dermatological conditions [18–24]. Recently, a laser-based PAI system was proposed for volumetric imaging of a single follicular unit [25]. In this work, we introduce an LED-based photoacoustic imaging method for fast and non-invasive characterization of follicle density and subdermal angle, with an emphasis on its diagnostic value in FUE and FUT.

#### **2. Materials and Methods**

#### *2.1. Photoacoustic Imaging System*

In this work, we used an AcousticX LED-based photoacoustic imaging system from CYBERDYNE Inc. (formerly Prexion; Tokyo, Japan) [26]. The system is equipped with a 128-element linear array ultrasound transducer with a central frequency of 10 MHz and a bandwidth of 80.9%, fitted with two LED arrays. The system can be used with LED arrays of various wavelengths; we used both 690 and 850 nm arrays in this study. We found that 690 nm provides more information about the skin layer, while 850 nm offers greater penetration depth.

The repetition rate of these LEDs is tunable between 1, 2, 3, and 4 kHz. The pulse width can be changed from 50 to 150 ns in 5-ns steps. The transducer can be scanned to generate three-dimensional (3D) data using a maximum intensity projection (MIP) algorithm. The lateral and axial resolutions of this imaging system are ~550 and ~260 μm, respectively. The system can detect blood vessels to a depth of ~1.5 cm [26].

#### *2.2. Imaging and Data Collection*

The study enrolled a single healthy adult Caucasian male; the subject provided written informed consent. All work was conducted with approval from the UCSD Institutional Review Board and in accordance with the ethical guidelines for human subject research set forth by the Helsinki Declaration of 1975. To maintain a sterile imaging environment, we used a sterile CIV-Flex probe cover (CIVCO Medical Solutions, Coralville, IA, USA) on the photoacoustic transducer.

The images were acquired using acoustically transparent coupling gel (Clear Image Singles ultrasound scanning gel, Next Medical Products Company, Somerville, NJ, USA), and we evaluated several regions of the body with varying follicle density. First, we imaged the nape of the subject's neck to include both skin and hair in the same field of view. We then imaged the parietal region of the scalp after trimming the hair with clippers (Wahl Clipper Corporation, Sterling, IL, USA). We used both ultrasound and photoacoustic modes to image the follicle density on the scalp.

In order to show that this technique can image hairs that are not visible (subdermal hairs), we imaged the abdomen at baseline, trimmed to leave a residual ~1 mm of hair (Wahl trimmer), and shaved with a razor (Gillette Mach 3, Boston, MA, USA). To further evaluate the capacity of the imaging technique to evaluate follicle density, we also imaged the subject's arm (little hair; no trimming or shaving) and face (freshly shaved). These various regions of the body were also photographed while protecting the subject's anonymity.

The imaging experiments used both 690 and 850 nm excitation wavelengths. The scanner was swept by hand at ~1 cm/s across the skin, collecting both ultrasound and PA data for subsequent 3D visualization via maximum intensity projection (MIP), a volume rendering that projects the voxels of maximum intensity in the corresponding plane [27,28]. Photographs were also taken at each imaging site for comparison of hair density.

#### *2.3. Data Analysis*

All imaging data were recorded as RF data and reconstructed using Fourier transform analysis (FTA) [29]. Images were exported and analyzed as BMP files. The ImageJ 1.48v toolbox was used to analyze all the data [30]. We used both B-mode cross-sections [15,31] and MIP volumes [23] to evaluate the capabilities of our LED-based photoacoustic imaging system for this application. In all images, ultrasound pixel intensities are shown in 8-bit grayscale, and photoacoustic pixel intensities are shown with hot color maps. To measure follicle densities, we compared the region of interest (ROI) of a photoacoustic MIP image to a photograph (2 × 2 cm<sup>2</sup> for both). To identify the follicles, we combined two metrics: photoacoustic intensity and image morphology. We evaluated the background photoacoustic intensity histogram and found that the background had a mean 8-bit pixel intensity below 35. We therefore set a threshold of 35: pixels with intensity above this value were defined as features, i.e., veins or follicles. Veins and follicles were then distinguished visually by their structure. Each photoacoustic image and photograph was divided into four quadrants, and the number of follicles was counted manually. Subdermal follicular angles were measured with respect to the dermis using the angle tool in ImageJ.
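The threshold-and-count step described above can be sketched in Python. This is a minimal illustration, not the study's pipeline: the threshold of 35 comes from the text, but the synthetic image and the minimum-size filter are assumptions, and the visual vein/follicle distinction performed in the study is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def count_features(mip, threshold=35, min_pixels=4):
    """Count connected bright features in an 8-bit PA MIP image.

    Pixels above `threshold` (the paper's empirically chosen background
    cutoff of 35) are treated as features; tiny specks are discarded.
    """
    mask = mip > threshold
    labels, n = ndimage.label(mask)                    # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1)) # pixels per component
    return int(np.sum(sizes >= min_pixels))

# Synthetic 8-bit MIP: dark background with two bright follicle-like spots
img = np.full((50, 50), 20, dtype=np.uint8)
img[10:14, 10:14] = 120
img[30:34, 35:39] = 200
print(count_features(img))  # 2
```

In practice, such a count would be run per quadrant of the 2 × 2 cm<sup>2</sup> ROI to mirror the manual procedure.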

#### *2.4. Statistical Analysis*

Follicle densities for each region were quantified by averaging the counts for each quadrant of the ROI, and error bars represent the standard deviation across these four areas. We used a Student's t-test to assess statistical differences between the measurement techniques.
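The per-quadrant averaging and the comparison between measurement techniques can be sketched as follows; the quadrant counts here are hypothetical placeholders, not the study's data, and the sample standard deviation (ddof=1) is an assumption about how the error bars were computed.

```python
import numpy as np
from scipy import stats

# Hypothetical per-quadrant follicle counts for one 2 x 2 cm^2 ROI,
# counted from the photograph and from the PA MIP image.
photo_counts = np.array([62, 58, 65, 60])
pa_counts = np.array([60, 57, 66, 59])

density_photo = photo_counts.mean()       # mean follicles per quadrant
err_photo = photo_counts.std(ddof=1)      # error bar: SD across the 4 quadrants

# Two-sample Student's t-test between the two measurement techniques
t, p = stats.ttest_ind(photo_counts, pa_counts)
print(f"photo density {density_photo:.2f} +/- {err_photo:.2f}, p = {p:.2f}")
```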

#### **3. Results**

The main goal of this study was to evaluate photoacoustic ultrasound as a diagnostic and staging tool for hair transplant procedures including FUE. Figure 1 shows the current and potential clinical workflow for the implementation of photoacoustic imaging to evaluate follicular organization. Figure 1A illustrates the value of photoacoustic imaging in pre-surgical staging for FUE: the images could be used to measure subdermal follicular angles (alpha and beta). Since these angles affect the transection rate and can lead to more hair loss [32], a threshold angle could be established such that follicles below this value are not chosen for extraction. Follicle characterization steps are detailed in Figure 1B–E. These steps demonstrate that hair trimming and shaving followed by photoacoustic imaging can detect individual hair follicles, subdermal roots, and follicular angles. Future applications in follicular unit extraction and transplantation (FUE and FUT) are detailed in Figure 1F–J, highlighting the value of imaging at the short- and long-term post-implantation stages. Specifically, the implanted follicles can be visualized prior to eruption from the dermis immediately post-implantation (Figure 1G), during telogen effluvium to verify the presence of the residual papilla (Figure 1H), or periodically over the long term (12–24 months) to monitor the formation of cysts from improper implantation (Figure 1J).

**Figure 1.** Photoacoustic imaging of follicular organization. The upper panel shows the current work. (**A**) The follicular angle under the skin can be determined using photoacoustic imaging (PAI). (**B**) Hair trimming, (**D**) shaving, and subsequent (**C**,**E**) PA imaging to detect individual hair follicles, subdermal roots, and follicular angles. The lower panel shows future applications in follicular unit extraction and transplantation (FUE and FUT). (**F**) Hair follicles are excised from a safe donor area. (**G**) Extracted follicles are implanted at the recipient site. (**H**) The original hair shaft sheds while the dermal papilla is retained in a process called telogen effluvium ("shock loss"). PAI can quickly confirm successful implantation by imaging hair growth within the papilla before the follicle is visible on the skin surface. (**I**) Hair growth from successfully transplanted follicles. (**J**) Follicles implanted too deep can result in cyst formation 1–2 years after the procedure. PA imaging can monitor this cystic growth.

#### *3.1. Hair Follicle Imaging Using PAI*

We first used the 690 nm LED light source and performed positive (hair area) and negative (bare skin area) control experiments to confirm that hair produces PA signal (Figure 2). We then used PAI to image below the skin, measure follicle angle, and determine follicle density (Figures 3–6). Figure 2A shows a photograph of the nape of the neck with unshaved and shaved hair. The corresponding PA image, taken along the dashed line in Figure 2A, is presented in Figure 2B. The PA intensity of unshaved hair in this area was 4.23-fold higher than that of the shaved scalp. These data show that the PA signal corresponds to hair volume. Our subject had gray-brown hair. Ford et al. demonstrated that gray and blond hair generate significantly lower photoacoustic signal than black hair, due to the higher melanin content of black hair [25].

**Figure 2.** Hair imaging using PAI. This experiment includes positive and negative controls to validate the imaging technique using the 690 nm light source. (**A**) Photograph of unshaved and shaved portions of the scalp. The red dashed line represents the imaging plane. (**B**) Photoacoustic image along the dashed line in A, which includes the skin and full hair. The photoacoustic intensity in the full-hair area is 4.23-fold higher than that of the shaved skin. (**C**) Photograph of the nape, with lower hair density than the scalp. The red dashed line represents the imaging plane in D. (**D**) Photoacoustic image along the red dashed line in C. The photoacoustic intensity of hair in this area was 2.8-fold higher than that of the surrounding skin. Blue dashed squares are the regions of interest (ROIs) used to measure PA intensity on the skin and hair.

We further confirmed this approach by imaging an area with varying hair density. The nape of the neck has decreasing hair density along a plane moving down the spine (Figure 2C). The subject also had recent sun exposure, with reddened skin in areas not covered by hair. Figure 2D shows a photoacoustic image collected along the red dashed line in Figure 2C. We note the low signal for the bare skin relative to the area with hair. The freshly shaved skin has a lower signal than the nape of the neck because the nape had been exposed to sun and had more pigmentation (Figure 2D). The intensity of hair on the nape was 2.8-fold higher than that of the surrounding skin. Using the photoacoustic intensity line profiles of the hair, we measured the average full width at half maximum (FWHM) of the PA profiles in Figure 2B,D to be 1.02 ± 0.19 and 0.37 ± 0.09 mm, respectively.

This shows that the width of the PA signal can also be an acceptable metric to quantify hair density. Scanning a 2 × 2 cm<sup>2</sup> area took 10 s to collect the raw data, 15 s to reconstruct the data, and up to 5 min to process. Future work can integrate automated image-processing tools to streamline the workflow [33]. It is also worth noting that, since the transducer is small (4 cm wide), we can easily perform multiple scans from multiple directions and over both flat and curved surfaces.
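The FWHM measurement on a PA intensity line profile can be sketched as below. The profile here is a synthetic Gaussian (not data from the study), chosen because its true FWHM is known analytically (2√(2 ln 2)·σ), which makes the routine easy to check.

```python
import numpy as np

def fwhm(profile, dx_mm):
    """Full width at half maximum (mm) of a 1D PA intensity line profile,
    with linear interpolation at the half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    p = p - p.min()                      # remove baseline
    half = p.max() / 2.0
    above = np.nonzero(p >= half)[0]
    left, right = above[0], above[-1]

    def crossing(i0, i1):
        # linear interpolation of the half-max crossing between indices i0, i1
        return i0 + (half - p[i0]) / (p[i1] - p[i0]) * (i1 - i0)

    x_left = crossing(left - 1, left) if left > 0 else float(left)
    x_right = crossing(right, right + 1) if right < len(p) - 1 else float(right)
    return (x_right - x_left) * dx_mm

# Synthetic Gaussian profile: true FWHM = 2*sqrt(2*ln 2)*sigma = 23.55 px
x = np.arange(200)
profile = np.exp(-0.5 * ((x - 100) / 10.0) ** 2)
width_mm = fwhm(profile, dx_mm=0.1)      # ~2.355 mm with 0.1 mm pixels
print(width_mm)
```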

Figure 3A shows the absorption spectra of melanin, oxyhemoglobin, deoxyhemoglobin, and water. All data were downloaded from http://omlc.ogi.edu/spectra/. For hemoglobin, we used the molar extinction coefficient (*e*) from [34], and the absorption coefficient was calculated as μ<sub>a</sub> = 2.303 × *e* × (150 g Hb/liter)/(66,500 g Hb/mole). For the μ<sub>a</sub> of melanin in skin, we used μ<sub>a</sub> = 1.70 × 10<sup>12</sup> × λ<sup>−3.48</sup>, where λ is the wavelength [35]. Using the same imaging setup, we performed PAI and ultrasound on the trimmed (with no guard) region of the scalp (see Figure 5G for the exact region imaged). Figure 3B–D show the photoacoustic, ultrasound, and overlaid photoacoustic and ultrasound images, respectively. With PAI, individual hair follicles are seen as discrete spots on the scalp (Figure 3B). Ultrasound imaging was used to measure a skin thickness of 3.78 ± 0.28 mm, which is within the reported range of 2–6 mm for normal adult humans [36]. The skull thickness was measured to be 6.05 ± 0.64 mm, which is consistent with literature values of ~6.5 mm for adult men [37,38].
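The two absorption formulas can be checked numerically. As a sanity check on the melanin power-law model (constants from the text; the wavelength is assumed to be in nm), the ratio of absorption at 690 versus 850 nm reproduces the ~2.07-fold factor quoted later in the discussion of Figure 4.

```python
def mu_a_blood(eps):
    """Hemoglobin absorption coefficient (cm^-1) from the molar extinction
    coefficient eps (cm^-1 M^-1), assuming 150 g Hb/L and 66,500 g/mol
    as stated in the text."""
    return 2.303 * eps * 150.0 / 66500.0

def mu_a_melanin(wavelength_nm):
    """Skin-melanin absorption coefficient: mu_a = 1.70e12 * lambda^-3.48."""
    return 1.70e12 * wavelength_nm ** -3.48

# Ratio of melanin absorption at 690 vs 850 nm (text reports ~2.07)
print(round(mu_a_melanin(690) / mu_a_melanin(850), 2))  # 2.07
```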

**Figure 3.** Trimmed scalp imaging. (**A**) Absorption spectra of melanin, oxyhemoglobin, deoxyhemoglobin, and water. Data from http://omlc.ogi.edu/spectra/. (**B**) Photoacoustic image of the trimmed scalp. Individual hair follicles are represented by discrete spots on the scalp. (**C**) Ultrasound image from the same plane as B, including measurements of skull thickness from the ultrasound data. The skull thickness for this subject was 6.05 ± 0.64 mm. (**D**) The photoacoustic and ultrasound overlay allows us to locate hair follicles with respect to the anatomical features of the skull.

**Figure 4.** Subdermal imaging. Photoacoustic images of (**A**) unshaved and untrimmed, (**B**) trimmed, and (**C**) shaved abdominal skin using both wavelengths (690 and 850 nm). We could detect individual roots and subdermal hair strands using PAI after trimming and shaving. (**D**) We used a skin tracer algorithm to remove the photoacoustic contribution from the skin. (**E**) Photoacoustic image of the shaved area after removing the photoacoustic signal of the skin via digital image processing. Photoacoustic images of the shaved region using (**F**) 690 and (**G**) 850 nm. Melanin in the skin absorbs strongly at 690 nm, limiting penetration depth; imaging at 850 nm allows deeper penetration, enabling visualization of both the roots and subdermal strands.

**Figure 5.** Follicular density images. (**A**,**D**,**G**) Photographs; (**B**,**E**,**H**) cross-sectional photoacoustic images along the red dashed lines; and (**C**,**F**,**I**) maximum intensity projection (MIP) photoacoustic images of the biceps, shaved stomach, and trimmed scalp, respectively, using the 850 nm LED light source within the 2 × 2 cm<sup>2</sup> areas marked by red dashed rectangles. PAI shows the absence of hair on the biceps, whereas subdermal hair follicles can be seen in the stomach and scalp regions. Vasculature close to the skin surface can also be imaged because hemoglobin absorbs at near-infrared wavelengths and produces photoacoustic signal.

**Figure 6.** Quantification of follicle density. (**A**,**B**) Photographs of 2 × 2 cm<sup>2</sup> medium-density (abdomen) and high-density (scalp) areas. (**C**) Correlation between follicle density measured using photoacoustic data and photographs. A high correlation (R<sup>2</sup> = 0.97) is observed between the two techniques.

#### *3.2. Subdermal Imaging*

Next, we evaluated the utility of this technique for hair that is not visible above the skin, using the abdominal region as a model for the subdermal papilla that might be present after shock loss during transplantation (Figure 1E). Figure 4A shows a photoacoustic image (690 and 850 nm) of the unshaved stomach with multiple hair follicles present above the skin surface. We then removed an increasing amount of hair, first trimming the region to a residual ~2 mm of hair on the skin surface (Figure 4B). While the photoacoustic signal from the skin surface decreases, the signal from the root remains. We next shaved the skin to remove all hair protruding from the skin surface while leaving the subdermal region intact. Figure 4C shows that PAI can image and detect hair follicles and their roots under the skin surface.

Previous studies show that the skin surface generates a strong photoacoustic signal [39], because it is the first region to encounter the optical pulse and thus experiences the highest fluence. Skin with a high melanin content also produces a strong photoacoustic signal because melanin absorbs near-infrared light [40,41] (Figure 3A). Regardless of the source, the strong signal from skin can make detecting follicles under the skin surface challenging. We therefore used a skin tracing technique (digital image processing) to remove the signal from the skin surface and facilitate improved image formation [42,43]. Figure 4D shows the skin trace from Figure 4C. We removed the photoacoustic contribution from the skin surface to obtain Figure 4E. Figure 4F,G show the shaved stomach imaged at 690 and 850 nm, respectively. Most of the 690 nm light is absorbed by melanin in the skin; hence, we cannot image the follicles beneath it. With 850 nm illumination, however, light penetrates deeper, allowing us to image the root and the follicle. The literature shows that melanin absorption at 690 nm is ~2.07 times higher than at 850 nm [44] (Figure 3A).
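A minimal sketch of such skin-signal removal is shown below, assuming the surface is simply the brightest pixel in each image column; the cited skin-tracing methods [42,43] are more sophisticated, and the band width and synthetic frame here are illustrative assumptions.

```python
import numpy as np

def remove_skin_signal(pa, band_px=5):
    """Trace the skin surface in a PA B-mode frame and suppress its signal.

    Assumes the skin line is the strongest absorber in each column (a
    simplification). Zeroes a band of `band_px` pixels from the traced
    surface downward, leaving deeper subdermal signal intact.
    """
    out = pa.astype(float).copy()
    for col in range(pa.shape[1]):
        surface = int(np.argmax(pa[:, col]))        # brightest pixel = skin
        out[surface:surface + band_px, col] = 0.0   # blank the skin band
    return out

# Synthetic frame: bright skin line at depth 10, subdermal follicle at depth 25
frame = np.zeros((40, 8))
frame[10, :] = 100.0       # skin surface
frame[25, 3] = 40.0        # follicle root below the skin
cleaned = remove_skin_signal(frame)
print(cleaned[10].max(), cleaned[25, 3])  # 0.0 40.0
```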

Some individual hair strands can be seen germinating from the roots, whereas others show discrete spots with no strands. This is expected because not all follicles are perfectly aligned with the plane of the transducer. Our transducer covers a 3.5-cm field of view. The field of view of a linear array transducer is a function of the number of elements and the distance between them (pitch size). Additionally, the pitch size scales as ~1/(2*fc*), where *fc* is the center frequency of the ultrasound transducer [45]. Therefore, the field of view is a function of the center frequency and the number of elements: higher center frequencies with a constant number of elements yield a smaller field of view.
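The scaling above can be illustrated numerically under the common λ/2 pitch design rule (an assumption; the actual probe's 3.5-cm field of view with 128 elements implies a pitch larger than λ/2, so this sketch shows the scaling only, not this system's geometry).

```python
def half_wavelength_pitch_mm(fc_hz, c_m_s=1540.0):
    """Element pitch (mm) under the lambda/2 design rule,
    with c_m_s the speed of sound in soft tissue (m/s)."""
    wavelength_m = c_m_s / fc_hz
    return wavelength_m / 2.0 * 1e3

def field_of_view_mm(n_elements, pitch_mm):
    """Lateral field of view of a linear array = number of elements x pitch."""
    return n_elements * pitch_mm

pitch = half_wavelength_pitch_mm(10e6)   # 0.077 mm at 10 MHz
fov = field_of_view_mm(128, pitch)       # ~9.9 mm
print(pitch, fov)
```

Doubling the center frequency halves the pitch and therefore halves the field of view for a fixed element count, which is the trade-off noted in the text.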

#### *3.3. Follicle Density*

Follicular density (number of follicles/cm<sup>2</sup>) is important in scouting for donor areas and assessing transplant success [8]. We evaluated the ability of the PAI technique to quantify follicle density in various regions. In humans, follicular density varies widely across the body; for example, the scalp has a higher follicular density than the biceps. The average follicular unit and hair densities per square centimeter are 65–85 and 124–200, respectively [10]. We identified three areas with varying hair density: the biceps (Figure 5A), shaved stomach (Figure 5D), and trimmed scalp (Figure 5G) as low-, medium-, and high-density areas, respectively. We imaged the 2 × 2 cm<sup>2</sup> areas represented by the red dashed rectangles in Figure 5A,D,G. Figure 5B,E,H show the cross-sectional photoacoustic images along the red dashed lines from the biceps, shaved stomach, and trimmed scalp, respectively. Figure 5C,F,I represent the MIP photoacoustic maps of the biceps, shaved stomach, and trimmed scalp, respectively. These MIP maps show both hair and blood vessels. Indeed, superficial vasculature can also be detected using photoacoustic imaging due to light absorption by hemoglobin [46]. For PA follicle density evaluation, we used the 850 nm LED light source (longer wavelength, greater penetration depth). These maps are similar to previous studies that visualized vascularity using PAI [47–49].

Figure 6 further processes the raw data of Figure 5 and quantifies the follicular density, along with a fourth data point from the face/beard of the same subject (photographs not included in Figure 5 for anonymity). We plotted the follicles/cm<sup>2</sup> from both photographs and photoacoustic MIP maps for the four regions and show that follicular density changes with skin region (Figure 6C). There was good correlation (R<sup>2</sup> = 0.97) between follicular density measured using PAI and visual counting from photographs. These results show that PAI can quantify hair follicle density across different regions, including regions where the hair is not visible above the skin surface.
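The correlation analysis reduces to a Pearson fit between the two density measurements across regions. A minimal sketch follows; the density values below are hypothetical placeholders, not the study's counts.

```python
import numpy as np

# Hypothetical follicle densities (follicles/cm^2) for four regions
# (biceps, abdomen, scalp, face): photograph vs photoacoustic MIP counting.
photo_density = np.array([2.0, 31.0, 74.0, 48.0])
pai_density = np.array([1.5, 29.0, 76.0, 46.0])

r = np.corrcoef(photo_density, pai_density)[0, 1]  # Pearson correlation
r_squared = r ** 2
print(round(r_squared, 3))
```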

#### *3.4. Follicle Angle*

The transection rate in FUE can impact outcomes, and one key variable here is the follicle angle: the rate of transection increases when the angles point toward the nuchal lines [32]. We utilized PAI to measure the angle of multiple follicles (*n* = 25) relative to the skin surface on the shaved stomach. The follicular angle (β) was measured to be 18.62 ± 5.28° in this area (Figure 4C). To the best of our knowledge, no other study has reported the follicle angle on the stomach; values reported for the scalp vary from 10° to 40° depending on the location [50–52].

#### **4. Discussion**

There are several error sources unique to photoacoustic imaging in this application. First, hair and skin color are key parameters: lighter hair and skin generate less photoacoustic signal than darker hair and skin, and darker skin's higher melanin content absorbs more light, reducing penetration depth. Second, scanning is performed manually with a hand-held transducer; shaking and instability can vary between operators but could be minimized with training, and image-processing algorithms can also resolve and remove these artifacts [53,54]. Third, the resolution of our photoacoustic system could limit scanning of very fine hair. The resolution of our imaging system depends on the center frequency of the ultrasound transducer. As mentioned above, our LED-based imaging system has an axial resolution of 260 μm; if a hair's width is below this resolution limit, our imaging system may not be able to resolve it. Similarly, if two follicles are separated by less than ~550 μm (the lateral resolution), we cannot resolve both follicles. These limitations could be overcome by increasing the center frequency of the ultrasound transducer.

Previously, Ford et al. demonstrated structural and functional analysis of hair follicles using volumetric multispectral optoacoustic tomography [25]. That work offered higher-resolution images than ours, but we believe that our LED-based photoacoustic imaging system has important advantages: a ~7-fold larger field of view and LED light sources that are more compact, safe, inexpensive, and rugged.

In future work, we will image follicular features such as density and subdermal angle with the LED-based photoacoustic imaging technique and validate their prognostic value in in vivo models of FUE and FUT. We will also evaluate the technique in patients with a range of skin tones to better understand its limitations, and we will develop an image-processing algorithm to segment the photoacoustic images and measure follicle density more automatically.

#### **5. Conclusions**

In this study, we evaluated the application of photoacoustic imaging for fast and non-invasive characterization of follicular density and subdermal angles, with an emphasis on its diagnostic value for FUE and FUT. We showed that a portable, inexpensive, low-fluence LED-based imaging system has potential value for these procedures.

**Author Contributions:** A.H. conducted the experiments, data analysis, and prepared the manuscript. C.M. and Y.M. both helped to perform the literature research. J.V.J. had the idea of this article, helped in data collection, and critically revised the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This study was funded by National Institutes of Health under grants R21 AG065776, R21 DE029025, and DP2 HL137187. We acknowledge NSF funding under grants 1842387 and 1937674.

**Conflicts of Interest:** The authors declare no conflict of interest.

#### **References**


**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

MDPI St. Alban-Anlage 66 4052 Basel Switzerland Tel. +41 61 683 77 34 Fax +41 61 302 89 18 www.mdpi.com

*Sensors* Editorial Office E-mail: sensors@mdpi.com www.mdpi.com/journal/sensors
