**2. Methods**

#### *2.1. Introduction to the 3D-PACT System*

Figure 1 shows a schematic of the 3D-PACT system. The system consists of four parts: optical illumination, photoacoustic signal acquisition, data processing, and human–computer interaction. A pulsed Nd:YAG laser (Vigour-A-100S, Ziyu, Anshan, Liaoning Province, China) with a wavelength of 532 nm was used for optical illumination. The nanosecond pulsed laser had a pulse repetition rate of 20 Hz and delivered a beam with a pulse width of 5 ns. After collimation and optical-path correction, the laser beam was coupled into a multimode fiber (Ceram Optec, Bonn, Germany; damage threshold 9.1 mJ/mm²) mounted in the fiber coupler. At its output end, the multimode fiber was divided into eight branches evenly distributed around the water tank. After testing, the coupling ratios of the eight sub-fibers were 9.35%, 9.25%, 9.33%, 9.34%, 9.40%, 9.43%, 9.44%, and 9.41%, respectively, giving a total fiber coupling ratio of 74.95%. The beam emerging from each branch passed through a convex lens and a plano-convex cylindrical lens to form a rectangular strip with a thickness of about 2 mm, as shown in Figure 2a,b, which present the top and front views of the optical path, respectively. The convex lens collimated and corrected the laser beam, while the plano-convex cylindrical lens focused the beam in the thickness direction. The eight laser beams separately passed through the water tank and the transparent quartz bowl, arranged as shown in Figure 2b, and eventually converged in the middle of the tank to form a bright circular area that irradiated the surface of the sample. Experimental verification showed that the light transmittance of the water tank was about 98%, owing to the excellent optical transparency of its acrylic material.
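As a quick consistency check of the optical-coupling figures above, the eight measured branch coupling ratios can be summed and compared against the reported total fiber coupling ratio; the values below are taken directly from the text.

```python
# Branch coupling ratios of the eight sub-fibers, in percent (from the text).
branch_ratios = [9.35, 9.25, 9.33, 9.34, 9.40, 9.43, 9.44, 9.41]

total = sum(branch_ratios)
print(f"total coupling ratio: {total:.2f}%")  # matches the reported 74.95%
```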

**Figure 1.** Schematic of the 3D-PACT system.

**Figure 2.** (**a**) Top view of the core optical path, (**b**) front view of the core optical path, (**c**) ultrasonic transducer and its size parameters, and (**d**) installation method and position diagram of ultrasonic transducer.

It is worth mentioning that this system is the first photoacoustic tomography system to innovatively adopt a quartz bowl, whose combined light transmission and ultrasonic reflection maintain the photoacoustic coplanarity of the system while photoacoustic signals are collected. This means that the circular spot lay in the same layer as the detection plane of the ultrasonic transducer. This slim photoacoustic coplanar configuration eliminated noise from adjacent planes and enabled optimal photoacoustic signal excitation and detection. The light transmittance and ultrasonic reflection efficiency, two critical parameters of the quartz bowl, were measured as 88.25% and 202.465%, respectively. The signal generated by the sample was reflected by the quartz bowl and detected by a dual-foci virtual-point ultrasonic transducer (Olympus, Tokyo, Japan) placed vertically above the bowl, with a 5 MHz central frequency and an 18 mm detection-plane diameter, as shown in Figure 2c. The surface of the dual-foci transducer was doubly concave so as to achieve foci in two perpendicular directions, indicated by the letters A and B in Figure 2c. In direction A, the focal length was 11.25 mm and the directional angle was 100°, while in direction B, the focal length was about 90.0 mm and the directional angle was 11.4°. From these parameters it can be concluded that the large receiving angle around the virtual point in direction A facilitated sparse sampling, while the long focal length in direction B formed a long focal zone (≈52 mm) suited to large imaging targets. The installation position of the ultrasonic transducer is shown in Figure 2d. In the 3D-PACT system, two custom-built virtual-point ultrasonic transducers were evenly distributed on the motor rotary table.
The rotating motor performed a circular motion at the preset speed, and the high-speed data acquisition device completed the collection of the photoacoustic signals during the motor rotation according to the timing trigger signal.

During the whole working process, photoacoustic signals were acquired by rotating the swivel table through 180°, driven by a rotary stepping motor, and moving the sample vertically in 0.1 mm steps, driven by a vertical stepping motor. After amplification (ultrasonic transceiver, 5073PR, Olympus, Tokyo, Japan) and digitization (data acquisition card, ATS330, AlazarTech, Canada), the raw photoacoustic data were sent to and stored on a personal computer (PC) to reconstruct the 3D photoacoustic image. The large receiving angle of the virtual-point-based ultrasonic transducer used in this system allowed the detected signals to carry more sample information, which helped recover images from less data. In the reconstruction stage, the shape of the sample could be reconstructed by the improved back-projection algorithm. The photoacoustic system used LabVIEW 2014 to implement the overall control system design, realizing fully automatic human–machine interaction.
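The scan geometry described above (a 180° table rotation with two transducers mounted 180° apart, plus 0.1 mm vertical steps) can be sketched as an enumeration of acquisition positions. This is a minimal illustration only: the angular step size and the number of vertical slices are not specified in the text and are assumed here for demonstration.

```python
import numpy as np

def scan_positions(angular_step_deg=1.5, n_slices=50, z_step_mm=0.1):
    """Enumerate the (angle, z) acquisition positions of the 3D-PACT scan.

    The swivel table rotates through 180 deg; with two transducers mounted
    180 deg apart, each slice gets effective 360 deg angular coverage.
    angular_step_deg and n_slices are illustrative assumptions, not values
    from the paper; z_step_mm = 0.1 mm matches the stated vertical step.
    """
    angles = np.arange(0.0, 180.0, angular_step_deg)          # motor angles
    # The second transducer contributes the opposite half of the circle.
    effective_angles = np.concatenate([angles, angles + 180.0])
    z = np.arange(n_slices) * z_step_mm                       # slice heights
    return effective_angles, z

angles, z = scan_positions()
print(len(angles))  # 240 effective view angles per slice
```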

#### *2.2. The Reconstruction Algorithm of the 3D-PACT System*

According to the principle of photoacoustic imaging, the photoacoustic wave equation can be formulated as:

$$\left(\nabla^2 - \frac{1}{c^2}\frac{\partial^2}{\partial t^2}\right)p(\mathbf{r}, t) = -p_0(\mathbf{r})\frac{d\delta(t)}{dt} \tag{1}$$

where *c* is the speed of sound and *p*<sub>0</sub> is the initial sound pressure generated by the energy deposition of the pulsed laser. By solving the above wave equation, the photoacoustic signal detected by the ultrasonic transducer at any time *t* and any detector position *r*′ is obtained, which can be expressed as:

$$p(\mathbf{r}', t) = \frac{\partial}{\partial t}\left[\frac{1}{4\pi c^3 t}\int d\mathbf{r}\, p_0(\mathbf{r})\,\delta\!\left(t - \frac{|\mathbf{r}' - \mathbf{r}|}{c}\right)\right] \tag{2}$$

The distribution of the original photoacoustic signal can be recovered by using the back-projection reconstruction algorithm [37–39]. Furthermore, the traditional back-projection reconstruction algorithm is shown in Equation (3):

$$p_0(\mathbf{r}) = \frac{1}{4\pi c^3}\int dS\,\frac{1}{t}\left[\frac{p(\mathbf{r}', t)}{t} - \frac{\partial p(\mathbf{r}', t)}{\partial t}\right]\bigg|_{t=|\mathbf{r}'-\mathbf{r}|/c} \tag{3}$$

Building on the traditional back-projection reconstruction algorithm, a reconstruction algorithm for ring-array photoacoustic imaging based on a sensitivity factor is proposed in this study. The photoacoustic signals collected by the transducer were back-projected onto each pixel along the arc determined by the time of flight in the imaging area, as illustrated schematically in Figure 3a. The aforementioned virtual-point ultrasonic transducer has a large signal receiving angle (100°, corresponding to the angle Φ in the figure), and the transducer is most sensitive to the central region of its receiving scope. Therefore, a variable sensitivity factor *S*(ψ) was introduced into the image reconstruction algorithm; it decreases as ψ, the angle between the line connecting the pixel to the probe and the center line of the detection area, increases. To implement the algorithm, the basic formula first needed to be discretized, as in Equation (4):

$$p_0(\mathbf{r}) = \sum_{i=1}^{n}\left[p(\mathbf{r}', t) - t\frac{\partial p(\mathbf{r}', t)}{\partial t}\right]\bigg|_{t=|\mathbf{r}'-\mathbf{r}|/c} \tag{4}$$

where the sum runs over the *n* discrete detector positions *r*′ introduced in the process of discretization. Taking the receiving angle of the ultrasonic transducer into account, the back-projection reconstruction algorithm based on the sensitivity factor is proposed and can be expressed as:

$$p_0(\mathbf{r}) = \sum_{i=1}^{n} S(\psi)_{(r,i)}\left[p(\mathbf{r}', t) - t\frac{\partial p(\mathbf{r}', t)}{\partial t}\right]\bigg|_{t=|\mathbf{r}'-\mathbf{r}|/c} \tag{5}$$
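Equation (5) can be sketched numerically as follows. This is a minimal 2D illustration, not the authors' implementation: the functional form of *S*(ψ) is not given in the text, so a cosine window over the 100° receiving cone is assumed here, and the nearest-sample lookup of the time of flight *t* = |*r*′ − *r*|/*c* is a simplification.

```python
import numpy as np

def sensitivity_factor(psi, psi_max=np.deg2rad(50)):
    """Sensitivity factor S(psi), decreasing with the angle psi between the
    pixel-to-detector line and the detector's central axis.
    A cosine window over the half receiving angle (50 deg, i.e. half of the
    100 deg receiving angle) is an illustrative assumption."""
    s = np.cos(np.pi * psi / (2 * psi_max))
    return np.clip(s, 0.0, None)  # zero weight outside the receiving cone

def improved_backprojection(signals, det_pos, det_dir, grid, c, fs):
    """Sketch of Eq. (5): sensitivity-weighted back-projection onto a 2D grid.

    signals : (n_det, n_t) detected pressure p(r', t)
    det_pos : (n_det, 2) detector positions r' [m]
    det_dir : (n_det, 2) unit vectors along each detector's central axis
    grid    : (n_pix, 2) pixel coordinates r [m]
    c, fs   : speed of sound [m/s] and sampling rate [Hz]
    """
    n_det, n_t = signals.shape
    t = np.arange(n_t) / fs
    # Back-projection term b(t) = p(r', t) - t * dp/dt from Eq. (5).
    b = signals - t * np.gradient(signals, t, axis=1)
    p0 = np.zeros(len(grid))
    for i in range(n_det):
        d = grid - det_pos[i]                                  # r - r'
        dist = np.linalg.norm(d, axis=1)
        # Nearest time sample at the time of flight t = |r' - r| / c.
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_t - 1)
        cos_psi = np.clip(d @ det_dir[i] / np.maximum(dist, 1e-12), -1.0, 1.0)
        p0 += sensitivity_factor(np.arccos(cos_psi)) * b[i, idx]
    return p0
```

Pixels near a detector's central axis (small ψ) thus receive full weight, while contributions near the edge of the receiving cone are attenuated, mirroring the transducer's measured angular sensitivity.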

**Figure 3.** (**a**) A 2D schematic of the improved back-projection reconstruction algorithm, and (**b**) 3D photoacoustic image reconstruction.

In order to perform 3D image rendering, a large number of B-scans were loaded into Amira 5.4.3 to be superimposed and processed, as shown in Figure 3b.
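The superposition step amounts to stacking the reconstructed 2D slices into a 3D volume along the 0.1 mm vertical scan axis; the rendering itself was done in Amira 5.4.3. A minimal sketch of the volume assembly, with an assumed slice size of 64 × 64 pixels for illustration:

```python
import numpy as np

def stack_bscans(bscans, z_step_mm=0.1):
    """Stack reconstructed 2D B-scan slices into a 3D volume.

    All slices must share the same pixel dimensions. Returns the volume
    (n_slices, ny, nx) and the physical z coordinate of each slice,
    using the system's 0.1 mm vertical step.
    """
    volume = np.stack(bscans, axis=0)
    z_mm = np.arange(len(bscans)) * z_step_mm
    return volume, z_mm

# Illustrative 64 x 64 slices; real slice sizes depend on the grid used.
slices = [np.zeros((64, 64)) for _ in range(10)]
vol, z = stack_bscans(slices)
print(vol.shape)  # (10, 64, 64)
```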
