Article

Path Tracing-Inspired Modeling of Non-Line-of-Sight SPAD Data

by Stirling Scholes and Jonathan Leach *

School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK

* Author to whom correspondence should be addressed.
Sensors 2024, 24(20), 6522; https://doi.org/10.3390/s24206522
Submission received: 13 September 2024 / Revised: 1 October 2024 / Accepted: 8 October 2024 / Published: 10 October 2024
(This article belongs to the Section Sensing and Imaging)

Abstract

Non-Line of Sight (NLOS) imaging has gained attention for its ability to detect and reconstruct objects beyond the direct line of sight, using scattered light, with applications in surveillance and autonomous navigation. This paper presents a versatile framework for modeling the temporal distribution of photon detections in direct Time of Flight (dToF) Lidar NLOS systems. Our approach accurately accounts for key factors such as material reflectivity, object distance, and occlusion by utilizing a proof-of-principle simulation realized with the Unreal Engine. By generating likelihood distributions for photon detections over time, we propose a mechanism for the simulation of NLOS imaging data, facilitating the optimization of NLOS systems and the development of novel reconstruction algorithms. The framework allows for the analysis of individual components of photon return distributions, yielding results consistent with prior experimental data and providing insights into the effects of extended surfaces and multi-path scattering. We introduce an optimized secondary scattering approach that captures critical multi-path information with reduced computational cost. This work provides a robust tool for the design and improvement of dToF SPAD Lidar-based NLOS imaging systems.

1. Introduction

Since the early demonstrations of ‘Seeing Around Corners’ [1], Non-Line of Sight (NLOS) imaging has become an active area of research; see Refs. [2,3] for recent reviews. Specifically, the ability to detect and/or track and reconstruct objects beyond the line of sight of an imaging system, based on the light scattered by the objects, has applications ranging from surveillance to autonomous navigation. NLOS imaging is commonly realized using systems composed of two principal parts: an active imaging system, in the form of a Lidar, and a reconstruction algorithm.
The first part, the active imaging system, is responsible for measuring information about the scene. This is often performed using a direct Time of Flight (dToF) Lidar system, in which pulses of light are used to illuminate the scene whilst a synchronized detector captures the scattered light. By accumulating the arrival times of the signal photons into a histogram, the total path length of the measured photons is constrained, enabling the 3D shape of the scene to be reconstructed. Single-Photon Avalanche Diodes (SPADs), when combined with Time-to-Digital Converters (TDCs), are well-suited to this task for two reasons: first, their single-photon sensitivity allows for the detection of signal photons despite the losses due to multiple scattering events [4,5,6]; second, their ability to time-tag photon arrivals enables the inverse reconstruction algorithms required by NLOS [7,8]. Consequently, SPAD Lidar-based NLOS has been realized in a variety of regimes, such as the following: using SPAD-array cameras operating in Short-Wave InfraRed (SWIR) [9,10]; with novel SPAD triggering regimes, to reduce range ambiguity [11]; improving localization via temporal focusing [12]; using compact commercially available SPAD sensors [13,14]; and at ranges from 50 m to 1.4 km [5,15]. Whilst dToF SPAD Lidars are the most common form of NLOS imaging systems, other Lidar configurations have been demonstrated, such as Frequency-Modulated Continuous Wave (FMCW) systems based on optical combs [16], THz radiation-based systems [17], Super-conducting Nano-wire Single-Photon Detector (SNSPD)-based systems [18], non-linear wavelength conversion systems [19,20], and structured light-based systems [21].
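The histogramming step described above can be made concrete with a short sketch. The following Python snippet is illustrative only (the bin width, histogram depth, photon counts, and jitter are assumed values, not those of any specific system): it accumulates time-tagged photon arrivals into a dToF histogram and recovers the total path length from the histogram peak.

```python
import numpy as np

C = 3e8             # speed of light, m/s
BIN_WIDTH = 50e-12  # assumed TDC bin width, 50 ps
N_BINS = 1024       # assumed histogram depth

rng = np.random.default_rng(0)
true_path = 12.0    # hypothetical total optical path length, m
sigma_irf = 250e-12 # assumed Gaussian IRF width, s

# Simulated photon timestamps: signal jittered by the IRF plus uniform background.
signal = rng.normal(true_path / C, sigma_irf, size=500)
background = rng.uniform(0, N_BINS * BIN_WIDTH, size=200)
timestamps = np.concatenate([signal, background])

# Accumulate time-tagged arrivals into the dToF histogram.
hist, edges = np.histogram(timestamps, bins=N_BINS, range=(0, N_BINS * BIN_WIDTH))
peak_bin = np.argmax(hist)
print(f"estimated path length: {edges[peak_bin] * C:.2f} m")
```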
The second part of an NLOS system is the reconstruction algorithm, which works in conjunction with the active imaging system. The algorithm is responsible for processing the time-of-arrival information from the imaging system in such a way that the 3D position of objects can be reconstructed. A number of NLOS image-reconstruction algorithms have been presented; for instance: light cone transformations when using confocal systems [22,23]; approaches based on the first measured photon [24]; improved occlusion and jitter resistance [25,26]; reduced photon accumulation and sample point requirements [27,28]; and the reconstruction of color images [29]. Furthermore, approaches aimed at realizing real-time processing using phasor fields, low-latency algorithms, and forward projection optimization have been presented [30,31,32], with recent processing techniques including the use of machine learning algorithms [33,34].
Although NLOS imaging has been demonstrated many times, relatively little formalism has been presented to predict the performance of SPAD-based NLOS systems [15,35,36]. Prior works on simulating NLOS systems have explored various approaches, including the use of simulated 3D environments [37], simulations of scanning systems [38,39], and simulations of light cone returns [40]. However, simulating the temporal distribution of photon detections at a sensor for scenarios featuring multiple objects in room-like environments remains challenging. Furthermore, it has been shown that once the temporal distribution of photon detections is known, SPAD data can be accurately simulated [41]. Data of this type can then be used to refine the components of imaging systems, develop novel reconstruction algorithms, and create machine learning data sets for training neural networks.
In this work, we present a versatile framework for modeling the temporal distribution of photon detections in dToF Lidar systems in the context of NLOS imaging. We demonstrate a proof-of-principle simulation via the creation of a virtual environment within the Unreal Engine. The simulation is built in two steps, as shown in Figure 1a. Firstly, the attenuation and path length from the laser spot to the SPAD detector are determined for a large number of sample points in the room. Secondly, the temporal distribution of photon detections at the SPAD is simulated by convolving the Impulse Response Function (IRF) of the detector (assumed in this work to be governed by a laser pulse with a Gaussian distribution in time) with the path length and attenuation. Our path tracing-inspired approach is able to account for a number of important factors, including the material reflectivities of objects, their distance from sources of illumination, and object occlusion. When combined, these quantities provide likelihood distributions for photon detections as a function of time, enabling the simulation of SPAD detection likelihoods. Additionally, the use of the computational model allows the individual components of the photon return to be isolated and the impact of extended features, i.e., walls, ceilings, and floors, to be commented on. Furthermore, we examine the impact of multi-pathing, in terms of the distribution of photon detections and on computational complexity. We address the latter of these challenges by implementing an optimized secondary scattering approach, which captures the salient multi-pathing information at a reduced computational cost.

2. Problem Description

This work makes use of a ‘demonstration room’, as shown in Figure 1, featuring a simple single ‘L’ bend at the end of a corridor, to function as a corner. The room contains two objects. The first is a vertical ‘pillar’, colored orange in Figure 1, which runs from the floor to the ceiling. This pillar has been rotated such that its surfaces are at 45° with respect to the walls of the room; this rotation is to examine the effect of non-orthogonal surfaces on the scattering. The second is a ‘box’, colored purple in Figure 1. The box primarily serves to cast a shadow into the room, to examine the effect of object occlusion on the system. Although we have used this simple room for a proof-of-principle demonstration, the implementation of our approach within a game development environment means that the large catalog of readily available 3D environments developed for games could be directly leveraged to explore NLOS in a diversity of scenarios that would be impractical to realize experimentally.
The SPAD-based dToF Lidar system is assumed to be co-located at a point in the corridor leading to the room shown in Figure 1a. The pulsed laser illuminates a single point, referred to as the ‘laser hit point’ (the red dot in Figure 1a) [42,43]. Figure 1 also shows the implementation of the room within the Unreal game engine. In Figure 1b, the laser point is represented by the light bulb icon. The shadows visible in the Unreal environment are cast by this light source. Unfortunately, the lighting within the virtual environment is not sufficiently physical to be matched directly to real systems. The SPAD is assumed to be coupled to a lens system that images an area, i.e., a Field of View (FoV), as shown by the green square in Figure 1. This modeling scheme is compatible with both pixel array-type sensors [44,45,46,47,48] and single-pixel sensors, such as SNSPDs [49]. Note that throughout this work a left-handed coordinate system is used, to maintain consistency with the Unreal Engine, which also uses a left-handed coordinate system.

3. Simulation Framework

3.1. Sampling the Scene

The scattering of light in an environment can be approximated by dividing the scene into a large number of points and determining the relationship between these points and the light source. This section outlines a framework for decomposing a 3D environment into a scalable number of points, as well as determining their properties, i.e., material, distance $R$, and angle to the light source. In conjunction with Figure 2a, consider a laser pulse $I$ incident at a laser hit point $(x_1, y_1, z_1)$. The scattering of the pulse off the laser hit point $(x_1, y_1, z_1)$, shown as red lines in Figure 2a, can be characterized by a hemispherical expansion ($1/R^2$), a reflectivity coefficient ($\Gamma_1$), and an angular distribution function referred to as a Bi-directional Reflectance Distribution Function (BRDF) ($B_1$) [50]. For a point $(x_j, y_j, z_j)$ in the room, at a distance $R_j$ from the laser hit point, the amplitude $A_j$ of the pulse is
$$A_j = I \times \frac{\Gamma_1 \alpha_1 B_1(\theta_1^i, \phi_1^i, \theta_1^r, \phi_1^r)}{R_j^2}. \tag{1}$$
Here, $\theta_1^i, \phi_1^i$ denote singular incident angles, and $\theta_1^r, \phi_1^r$ denote singular reflected angles for $B_1$ associated with the laser hit point. Figure 2b illustrates these angles from the perspective of $B_1$; $\alpha_1$ is a normalization constant, to ensure the conservation of energy. The $\theta$ angles span a $2\pi$ range in the local $y_1, z_1$ plane. The $\phi$ angles span the range $[0, \pi/2]$, defined relative to the surface normal $x_1$. The BRDF takes these angles as arguments and returns the fraction of the energy incident at a point from a specific direction $\theta^i, \phi^i$ that is scattered in a specific reflection direction $\theta^r, \phi^r$.
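As a concrete illustration of Equation (1), the following sketch evaluates $A_j$ under an assumed Lambertian BRDF, $B(\theta^r, \phi^r) = \cos(\phi^r)$, with $\alpha = 1/\pi$ chosen so that the scattered energy integrates to one over the hemisphere; all numerical inputs are hypothetical.

```python
import numpy as np

def lambertian_brdf(theta_r, phi_r):
    """Fraction of energy scattered toward (theta_r, phi_r); a Lambertian
    surface is independent of the incident direction."""
    return np.cos(phi_r)

def amplitude(I, gamma, alpha, brdf, theta_r, phi_r, R):
    """Pulse amplitude A_j at distance R from the scattering point, Eq. (1)."""
    return I * gamma * alpha * brdf(theta_r, phi_r) / R**2

# Hypothetical values: unit pulse, 50% reflective surface, observed point
# 3 m away at 30 degrees from the surface normal.
A = amplitude(I=1.0, gamma=0.5, alpha=1.0 / np.pi, brdf=lambertian_brdf,
              theta_r=0.0, phi_r=np.deg2rad(30), R=3.0)
print(A)
```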
To approximate the hemispherical scattering of the laser hit point, a large number $N$ of rays $R_j \,|\, j \in [1, N]$ are projected into the room from the laser hit point. These rays are realized within the Unreal Engine using line traces. The ray length $R_j$ is defined as the Euclidean distance between the laser hit point $(x_1, y_1, z_1)$ and the intersection point of the ray with the environment $(x_j, y_j, z_j)$:
$$R_j = \sqrt{(x_1 - x_j)^2 + (y_1 - y_j)^2 + (z_1 - z_j)^2}. \tag{2}$$
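The ray projection and distance calculation of Equation (2) can be sketched as follows. In the paper, the intersection points come from Unreal Engine line traces; here, stand-in intersection points are generated so the snippet is self-contained, and the uniform-hemisphere sampling is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100

# Uniform hemisphere directions about +x (the local surface normal in the
# paper's left-handed convention): cos(phi) uniform in [0, 1], theta in [0, 2*pi).
cos_phi = rng.uniform(0.0, 1.0, N)
sin_phi = np.sqrt(1.0 - cos_phi**2)
theta = rng.uniform(0.0, 2.0 * np.pi, N)
directions = np.stack([cos_phi, sin_phi * np.cos(theta), sin_phi * np.sin(theta)], axis=1)

laser_hit = np.array([0.0, 0.0, 0.0])  # (x1, y1, z1)
# Stand-in intersection points in place of actual line-trace results.
hit_points = laser_hit + directions * rng.uniform(1.0, 5.0, (N, 1))

R = np.linalg.norm(hit_points - laser_hit, axis=1)  # Eq. (2)
```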
At each point $(x_j, y_j, z_j)$ where the ray $R_j$ intersects the environment, a ‘lookin’ is created. A lookin is a construct that ‘looks into’ the room, and it is realized in the Unreal Engine by spawning a camera at the point $(x_j, y_j, z_j)$, aligned with the surface normal. This camera has a 180° FoV (to simulate hemispherical scattering) and a transverse resolution of $\tilde{N} = l \times p$ total pixels. The camera performs multiple render passes to extract the surface normal, the surface material, and the distance to the nearest surface for each pixel. Each pixel is then considered a new sample of the room, referred to as an ‘observed point’, as shown by the green lines in Figure 2a, creating a new set of $\tilde{N}$ observed points $(x_k^o, y_k^o, z_k^o \,|\, k \in [1, \tilde{N}])$. This combination of line traces and camera spawning allows a total of $N \times \tilde{N}$ observation points of the room to be gathered more efficiently than using an equivalent number of line traces. Each lookin is instantiated with a reflectivity $\Gamma_j$ and a BRDF $B_j$ based on the material properties and the surface normal of the point $(x_j, y_j, z_j)$. For each $B_j$, a corresponding set of local angles $(\theta_j^i, \phi_j^i, \hat{\theta}_j^r, \hat{\phi}_j^r)$ is created, where $\hat{\theta}_j^r, \hat{\phi}_j^r$ denote the reflected angles to the set $\tilde{N}$ of all observed points. Throughout this work, the $\hat{\ }$ notation is used to indicate a collection of values, such as all the angles to the collection of observed points from a given lookin. To calculate the energy sent from the laser hit point to each observed point $(x_k^o, y_k^o, z_k^o)$, the observed points must be converted from the coordinate system of the lookin point to the coordinate system of the laser hit point $(x_k^{lsr}, y_k^{lsr}, z_k^{lsr})$. This is achieved by applying a 3-dimensional coordinate transformation matrix $\Omega_{lsr}$ to the points:
$$\begin{pmatrix} x_k^{lsr} \\ y_k^{lsr} \\ z_k^{lsr} \end{pmatrix} = \Omega_{lsr} \begin{pmatrix} x_k^o \\ y_k^o \\ z_k^o \end{pmatrix} - \Omega_{lsr} \begin{pmatrix} x_j \\ y_j \\ z_j \end{pmatrix}. \tag{3}$$
The transformation $\Omega_{lsr}$ must be calculated and applied independently for each lookin, as it has an implicit dependence on the surface normal of the lookin point with which it is associated. By combining Equations (2) and (3), $\hat{\phi}_j^r$ can be derived from the dot product as
$$\hat{\phi}_j^r = \arccos\left[\frac{\hat{n}_x^{lsr}(-x_k^{lsr}) + \hat{n}_y^{lsr}(-y_k^{lsr}) + \hat{n}_z^{lsr}(-z_k^{lsr})}{\sqrt{(x_k^{lsr})^2 + (y_k^{lsr})^2 + (z_k^{lsr})^2}}\right] \,\Big|\, k \in [1, \tilde{N}], \tag{4}$$
where $\hat{n}^{lsr}$ is the normal vector associated with the laser hit point. The negative sign is present to account for the reflection; $\hat{\theta}_j^r$ is given by
$$\hat{\theta}_j^r = \operatorname{arctan4}\left(y_k^{lsr}, z_k^{lsr}\right) \,\Big|\, k \in [1, \tilde{N}], \tag{5}$$
where arctan4 represents the four-quadrant arctan function.
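Equations (3)–(5) reduce to a few lines of vectorized code. The sketch below assumes the observed points have already been transformed into the laser hit point's local frame via Equation (3); numpy's arctan2 plays the role of arctan4, and the sign convention follows Equation (4).

```python
import numpy as np

def reflection_angles(points_lsr, normal_lsr):
    """points_lsr: (N, 3) observed points in the laser hit point's local frame.
    normal_lsr: unit surface normal in the same frame.
    Returns (theta_r, phi_r) per point, following Eqs. (4) and (5)."""
    norms = np.linalg.norm(points_lsr, axis=1)
    # Eq. (4): the negative sign accounts for the reflection.
    phi_r = np.arccos(-(points_lsr @ normal_lsr) / norms)
    # Eq. (5): theta spans 2*pi in the local (y, z) plane.
    theta_r = np.arctan2(points_lsr[:, 1], points_lsr[:, 2])
    return theta_r, phi_r
```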
Figure 3 illustrates the distribution of the laser energy throughout the room to all $N \times \tilde{N}$ observed points. For illustrative purposes, the room is sampled using a 10 × 10 grid of rays $R_j$, resulting in $N = 100$ lookin points, with each lookin point sampling the room using 60 × 60 pixels for $\tilde{N} = 3600$, giving a total of 360,000 observation points of the room. The left panel shows the inverse-square loss as the distance from the laser hit point increases. The center panel shows the energy distribution based on the BRDF associated with the laser hit point, which is assumed to be Lambertian, i.e., $B_1(\theta_1^i, \phi_1^i, \hat{\theta}_1^r, \hat{\phi}_1^r) = \cos(\hat{\phi}_1^r)$. The right panel shows the combined effects of the inverse-square loss and the BRDF. For Lambertian scattering, the arccos function from Equation (4) is effectively removed, as the cosine in the BRDF cancels it.

3.2. Primary Scattering Paths

Figure 4 illustrates the implementation of the primary scattering paths. For each lookin point $(x_j, y_j, z_j)$, the local incident angles $(\theta_j^i, \phi_j^i)$ from the laser point are calculated together with the reflection angles $(\hat{\theta}_j^r, \hat{\phi}_j^r)$ to each observed point $(x_k^o, y_k^o, z_k^o \,|\, k \in [1, \tilde{N}])$. The BRDF, abbreviated as “B”, for each lookin $B_j$ is calculated using the local angles. Some fraction of the observed points for each lookin fall within the FoV of the SPAD. The energy from this reduced set of observed points is scattered back to the sensor. This ‘final’ scattering is proportional to the BRDFs of the observed points within the FoV ($[B_k^o : B_n^o]$) and the paths back to the sensor ($[R_k^s : R_n^s]$).
Additionally, while a lookin point cannot be spawned in a shadowed region of the room, it is possible for an observed point to lie within a shadowed region, as illustrated by the red cross in the shadowed region of Figure 4. To address this, observed points that lie in shadow must be removed from calculations involving the energy distributed from the initial laser hit point $(x_1, y_1, z_1)$. The calculation of the final scattering is mechanically similar to the calculations for prior scattering points. A rotation matrix $\Omega_k^o$ is used to transform the location of the lookin point $(x_j, y_j, z_j)$ into the coordinate system of the observed point, such that the incident angles $[\theta_k^i : \theta_n^i]$ and $[\phi_k^i : \phi_n^i]$ can be calculated. Additionally, the reflection angles $[\theta_k^r : \theta_n^r]$ and $[\phi_k^r : \phi_n^r]$ from each observed point within the FoV back to the position of the sensor are calculated. These angles are passed to the BRDF of each observed point within the SPAD FoV, which, together with the reflectivity coefficients $\Gamma$, defines the energy scattered back to the sensor.
Figure 5 shows the steps in the primary scattering scheme for two lookin points, shown as gray triangles. Figure 5a shows the energy distribution for a lookin point located on the nearest wall with a Lambertian BRDF, such that $B_j = \cos(\hat{\phi}_j^r)$. Figure 5b shows the energy distribution for a lookin point located on the pillar face nearest the laser hit point. For illustrative purposes, the pillar has been assigned a BRDF of $B_j = \sin(\hat{\theta}_j^r)$, which causes it to direct the majority of its energy into a plane parallel with the floor. Furthermore, the observed points from the pillar lookin have a ‘triangular’ shape as a result of the 180° FoV of the lookin combined with the angled surface upon which it has been spawned. The green square illustrates the FoV of the SPAD. The observation points that fall within the green square are used to calculate the final bounce back to the SPAD. The left panel shows the BRDF energy distributions from their respective lookin points. In comparison to the center panel of Figure 3, the left column of Figure 5a illustrates the same cosine distribution, although it has now been rotated to align with the surface normal of the lookin point. The center panel shows the effect of the inverse-square scaling on the energy distributions. The right panel shows close-in views of the SPAD FoV. These panels also illustrate the relative scattering intensity of each observed point, including the final bounce calculation. Once the final bounce has been calculated, the likelihood envelope function $L_p(t)$ in time $t$ associated with the primary paths can be written as the convolution of the path length and attenuation with an IRF, as follows:
$$
\begin{aligned}
A_1 &= \frac{\Gamma_1\, C_{atm}^{R_j}\, \alpha_1 B_1(\theta_1^i, \phi_1^i, \theta_1^r, \phi_1^r)}{(R_j)^2},\\
A_2 &= \frac{\Gamma_j\, C_{atm}^{R_k^o}\, \alpha_j B_j(\theta_j^i, \phi_j^i, \theta_j^r, \phi_j^r)}{(R_k^o)^2},\\
A_3 &= \frac{\Gamma_k^o\, C_{atm}^{2 R_k^s}\, \beta_k^o B_k^o(\theta_j^i, \phi_j^i, \theta_j^r, \phi_j^r)}{(R_k^s)^2},\\
L_p(t) &= \frac{q\, \pi f^2}{f_{no}^2} \sum_{[k:n]} I \times A_1 \times A_2 \times A_3 \times \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{t - (R_j + R_k^o + R_k^s)}{\sigma}\right)^2\right],
\end{aligned} \tag{6}
$$
where $\sigma$ represents the width of the impulse response function of the SPAD to a laser pulse with a Gaussian temporal shape, $q$ is the quantum efficiency of the SPAD detector, and $C_{atm}$ is the atmospheric attenuation term within the room. The $\beta$ term is a modified normalization that accounts for the single input and output vectors of the bounces associated with $A_3$; $f$ is the focal length of the collecting lens; $f_{no}$ is the f-number of the collection lens; $f$ and $f_{no}$ are used together to define the aperture of the collection optics. This aperture term is present in Equation (6) to render $L_p(t)$ unitless.
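A minimal sketch of Equation (6) is given below: each primary path contributes a Gaussian IRF centred on its total path delay and weighted by the product of its attenuation terms $A_1 A_2 A_3$. For simplicity, the path terms are assumed to be pre-converted to delays in seconds, and all constants (focal length, f-number, weights, delays) are illustrative rather than calibrated values.

```python
import numpy as np

def primary_envelope(t, path_delays, weights, sigma=250e-12,
                     q=1.0, f=0.05, f_no=2.5, I=1.0):
    """t: (T,) time axis in seconds; path_delays: (M,) total path delays,
    (R_j + R_k^o + R_k^s)/c, per path; weights: (M,) A1*A2*A3 products."""
    aperture = q * np.pi * f**2 / f_no**2
    # Gaussian IRF centred on each path delay, normalized to unit area.
    gauss = np.exp(-0.5 * ((t[:, None] - path_delays[None, :]) / sigma) ** 2)
    gauss /= sigma * np.sqrt(2.0 * np.pi)
    # Sum the weighted contributions of all paths at each time bin.
    return aperture * I * np.sum(weights[None, :] * gauss, axis=1)

# Hypothetical paths: three returns with decreasing weight.
t = np.linspace(0, 60e-9, 2400)
L_p = primary_envelope(t, np.array([20e-9, 28e-9, 35e-9]),
                       np.array([1e-3, 4e-4, 1e-4]))
```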

4. Secondary Scattering Paths

To extend the modeling to multiple bounces, a criterion that all secondary paths must pass through the associated lookin point is enforced, as illustrated in Figure 6. For a given lookin point $(x_j, y_j, z_j)$, the local incident angles are expanded to the set $(\hat{\theta}_j^i, \hat{\phi}_j^i)$ to accommodate the illuminated (i.e., not in shadow) observed points $(x_k^o, y_k^o, z_k^o \,|\, k \in [1, \tilde{N}])$. Due to the reciprocal nature of the BRDF, the incident angles $(\hat{\theta}_j^i, \hat{\phi}_j^i)$ from the observed points to the lookin point are equivalent to the reflection angles $(\hat{\theta}_j^r, \hat{\phi}_j^r)$ from the lookin point to the set of observed points. Consequently, a coordinate system transform $\Omega$ is not required. Each of the observed points contributes a scattering component proportional to the BRDFs of the observed points ($B_k^o \,|\, k \in [1, \tilde{N}]$), their coefficients of reflection ($\Gamma_k^o \,|\, k \in [1, \tilde{N}]$), and the total path length ($R_k^o + R_{kj}^o \,|\, k \in [1, \tilde{N}]$) from the laser hit point to the lookin point.
Characterizing secondary paths using the approach shown in Figure 6 has three effects. Firstly, the longer path length traveled ($R_k^o + R_{kj}^o$) ensures that the contributions from secondary paths are always less than those of a primary path, even in cases where $B_k^o$ is a perfect specular reflector. Secondly, the longer path length also ensures that the contributions from secondary paths arrive at the detector later in time than the primary paths associated with each lookin. Thirdly, the computational complexity of the problem now scales as $O(\tilde{N}^2)$. For instance, the set of total path lengths $\hat{T}_j$ for a single lookin point from the laser hit point to the observed points within the FoV of the SPAD is given in vector form by
$$\hat{T}_j = \begin{pmatrix} T_{1,1} & \cdots & T_{1,n} \\ \vdots & \ddots & \vdots \\ T_{n,1} & \cdots & T_{n,n} \end{pmatrix} = \begin{pmatrix} R_k^o \\ \vdots \\ R_n^o \end{pmatrix} \oplus \begin{pmatrix} R_k^o + R_{kj}^o \\ \vdots \\ R_n^o + R_{nj}^o \end{pmatrix}^{T} = \begin{pmatrix} R_k^o + R_k^o + R_{kj}^o & \cdots & R_k^o + R_n^o + R_{nj}^o \\ \vdots & \ddots & \vdots \\ R_n^o + R_k^o + R_{kj}^o & \cdots & R_n^o + R_n^o + R_{nj}^o \end{pmatrix}. \tag{7}$$
In Equation (7), the $^T$ represents the transpose and the $\oplus$ represents the vector addition operation. Equation (7) illustrates that for each path from the lookin point to an observed point within the FoV of the SPAD, the contributions from all observed points must be considered, with analogous operations to Equation (7) existing for the BRDFs. This $O(\tilde{N}^2)$ dependence results in poor computational scalability, even at relatively limited sampling densities ($N$, $\tilde{N}$). The complete likelihood envelope function $L(t)$ in time $t$ can be written as the sum of the primary envelope function $L_p(t)$ and a secondary envelope function $L_s(t)$,
$$
\begin{aligned}
\hat{A}_1 &= \frac{\Gamma_1\, C_{atm}^{R_k^o}\, \alpha_1 B_1(\theta_1^i, \phi_1^i, \hat{\theta}_1^r, \hat{\phi}_1^r)}{(R_k^o)^2} \,\Big|\, k \in [1, \tilde{N}],\\
\hat{A}_2 &= \frac{\Gamma_k\, C_{atm}^{R_{kj}^o}\, \beta_k B_k(\theta_k^i, \phi_k^i, \theta_k^r, \phi_k^r)}{(R_{kj}^o)^2} \,\Big|\, k \in [1, \tilde{N}],\\
\hat{A}_3 &= \frac{\Gamma_j\, C_{atm}^{R_k^o}\, \alpha_j B_j(\hat{\theta}_j^i, \hat{\phi}_j^i, \theta_j^r, \phi_j^r)}{(R_k^o)^2},\\
A_4 &= \frac{\Gamma_k^o\, C_{atm}^{2 R_k^s}\, \beta_k^o B_k^o(\theta_j^i, \phi_j^i, \theta_j^r, \phi_j^r)}{(R_k^s)^2},\\
L_s(t) &= \frac{q\, \pi f^2}{f_{no}^2} \sum_{[k:n]} \sum_{[k:n]} I \times \hat{A}_1 \times \hat{A}_2 \times \hat{A}_3 \times A_4 \times \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2}\left(\frac{t - \hat{T}_j}{\sigma}\right)^2\right],\\
L(t) &= L_p(t) + L_s(t).
\end{aligned} \tag{8}
$$
Equation (8) is an extended form of Equation (6) that accommodates the additional bounce and the $O(\tilde{N}^2)$ nature of the secondary paths.
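The $\oplus$ outer addition of Equation (7) maps directly onto array broadcasting, which also makes the $O(\tilde{N}^2)$ scaling explicit; the sketch below uses assumed path lengths.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
R_o = rng.uniform(1.0, 5.0, n)   # lookin -> observed point (in FoV) lengths
R_oj = rng.uniform(1.0, 5.0, n)  # observed point -> lookin secondary legs

# T[i, k] = R_o[i] + (R_o[k] + R_oj[k]): every outgoing path combines with
# every incoming secondary leg, giving an n x n matrix, hence O(n^2).
T = R_o[:, None] + (R_o + R_oj)[None, :]
print(T.shape)  # (1000, 1000)
```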
Figure 7 shows the likelihood envelope function $L(t)$ plotted on a log scale for a single laser hit point. This distribution also represents the histogram distribution that would be obtained by a SPAD sensor in the limit of infinite exposure time. To explore the impact of sampling density, the simulation was performed at two different resolutions: first, using a 10 × 10 grid of rays $R_j$, with each of the 100 lookin points sampling the room on a 60 × 60 grid; second, using a 45 × 45 grid of rays $R_j$, with each of the 2025 lookin points again using a 60 × 60 sampling grid. For illustrative purposes, the effective aperture of the lens system in Equation (8) is assumed to have a radius of 20 mm. Additionally, the values of $q$, $\Gamma$, and $C_{atm}$ are all assumed to be one, with $\sigma$ = 250 ps. The likelihood envelope function $L(t)$ is the uppermost solid black line. The color-coded lines illustrate the contributions to the total envelope of each feature of the room: specifically, the walls, roof, and floor (blue), the box (purple), the pillar (orange), and the secondary bounce (red). Examining the purple and orange components of Figure 7, it can be seen that the signal returned from the pillar generally arrives both with a lower amplitude and a greater delay than the signal returned from the box, which is consistent with the pillar being further from the laser hit point than the box. Furthermore, these observations are consistent with the prior experimental works presented in Refs. [5,6,51,52]. This consistency, together with the known feasibility of the likelihood approach to SPAD modeling [53], suggests that the approach presented here will agree with and could be used to model future SPAD-based NLOS experiments. However, complete validation of the proposed model will require quantitative comparisons to experiments featuring cluttered and dynamic environments, something beyond the immediate scope of this work. Additionally, examining only the pillar and box components of the distribution is consistent with prior experiments in which extended surfaces, i.e., the walls, floor, and roof of the room, are negligible. These extended surfaces are negligible in cases where either the materials of the extended surfaces are much less reflective than the objects in the room [18,25] or the distance between the objects in the room and the extended surfaces is large relative to the distance between the object and the laser hit point [5,26]. When these conditions are not met, the total return is dominated by the reflections from extended surfaces, as shown by the agreement between the dashed blue line of the room reflections and the solid black line of the total envelope. The red line in Figure 7 confirms that the secondary paths make only a minor contribution to the total envelope, which is consistent with the energy loss due to additional scattering events. Figure 7 also shows that while denser sampling of the room smooths and somewhat broadens the envelope functions, the overall profile remains relatively consistent. Finally, detection distributions of the type shown in Figure 7 are compatible with existing SPAD Lidar simulation techniques for generating physically realistic data [41].

Optimized Secondary Scattering Paths

Figure 8 illustrates the implementation of the optimized secondary paths scheme. The scheme is identical to that described in Section 4; however, by only considering a subset of secondary paths, the computational complexity can be significantly reduced. This creates a scalable approach, in which the number of secondary paths considered for each lookin point can be adjusted based on the computational resources available and the density of room sampling required. Here, we demonstrate a single secondary path, i.e., once the secondary paths have been calculated, only the maximum secondary path is retained for the remaining calculations. This scheme significantly reduces the computational complexity of the simulation by returning the model to an $O(N)$ problem. Specifically, for each path from the lookin point to an observed point within the FoV of the SPAD, only a single contribution from all observed points is considered, i.e., the transposed vector from Equation (7) is reduced from a vector to the single value $R_k^o + R_{kj}^o$.
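The optimized reduction can be sketched as follows: rather than forming the full matrix of Equation (7), only the single strongest secondary contribution (here selected by an assumed per-path weight) is retained per lookin point, so the cost returns to linear in the number of paths.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
R_o = rng.uniform(1.0, 5.0, n)      # lookin -> observed point lengths
R_oj = rng.uniform(1.0, 5.0, n)     # secondary legs back to the lookin
weights = rng.uniform(0.0, 1.0, n)  # assumed per-path attenuation products

best = np.argmax(weights)               # keep only the dominant secondary path
T_opt = R_o + (R_o[best] + R_oj[best])  # n values instead of n x n
```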
Additionally, for BRDFs that are independent of the incoming angles of the incident radiation, such as Lambertian scattering, the optimized path scheme has the effect of transforming the secondary envelope function $L_s(t)$ into a time-delayed and lower-amplitude copy of the primary envelope function $L_p(t)$. This is because the scattering from the lookin point is agnostic to the source of the incident radiation, i.e., a single incident ray from the laser hit point and a single incident ray from a secondary path are scattered through the same set of reflection angles. The additional bounce and the path length associated with the secondary path therefore adjust only the amplitude and the arrival time of the secondary path component, but not the overall shape of the envelope.
Figure 9 shows the likelihood envelope function $L(t)$ in the optimized secondary scattering scheme. For comparability with Figure 7, the same plotting conventions have been used. Figure 9 confirms that the optimized secondary path scheme retains features of the secondary path envelope $L_s(t)$ without affecting the primary envelope. Furthermore, since the total number of secondary paths being considered is reduced, the maximum amplitude of the secondary path envelope $L_s(t)$ is reduced, as is its total width. However, the position of the peak of $L_s(t)$ is unchanged, since the optimized scheme retains the most significant secondary scattering events. The precise increase in computation speed when using the optimized scattering approach depends on the total number of sample points being used. For the 10 × 10 × 60 × 60 case, a reduction in computation time from ≈1 h to <30 s is observed. This increased processing speed allows either more environments to be tested in a given time or the same environment to be sampled more densely in the same time.

5. Conclusions

This work presents a versatile framework for modeling the temporal distribution of photon detections in a dToF Lidar system. We have demonstrated a proof-of-principle simulation by means of a demonstration room within the Unreal Engine. Our approach is able to account for a number of important factors in the scene, and it provides likelihood distributions for photon detections as a function of time, thus enabling the further optimization of NLOS imaging systems and the development of novel reconstruction algorithms. By analyzing individual components of this distribution, we explore the impact of extended surfaces. We examine the effects of multi-pathing and sampling density, demonstrating an optimized secondary scattering approach that captures the salient multi-pathing information at a reduced computational cost. We expect this work to be a useful addition to the dToF Lidar community by assisting in the design of NLOS imaging systems.

Author Contributions

Conceptualization, S.S. and J.L.; Software, S.S.; Investigation, S.S.; Writing—original draft, S.S.; Writing—review & editing, S.S. and J.L.; Supervision, J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the Defence Science Technologies Laboratory through Project Dstlx-1000147352 and Engineering and Physical Sciences Research Council projects EP/T00097X/1 and EP/S026428/1.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

We have made the data used in this work publicly available at https://github.com/HWQuantum/NLOS-SIM, (accessed on 12 September 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Steinvall, O.; Elmqvist, M.; Larsson, H. See around the corner using active imaging. In Proceedings of the Conference on Electro-Optical Remote Sensing, Photonic Technologies, and Applications V, Prague, Czech Republic, 19–22 September 2011; SPIE: Bellingham, WA, USA, 2011; Volume 8186. [Google Scholar] [CrossRef]
  2. Faccio, D.; Velten, A.; Wetzstein, G. Non-line-of-sight imaging. Nat. Rev. Phys. 2020, 2, 318–327. [Google Scholar] [CrossRef]
  3. Geng, R.X.; Hu, Y.; Chen, Y. Recent Advances on Non-Line-of-Sight Imaging: Conventional Physical Models, Deep Learning, and New Scenes. Apsipa Trans. Signal Inf. Process. 2022, 11, 48. [Google Scholar] [CrossRef]
  4. Laurenzis, M.; Christnacher, F.; Klein, J.; Hullin, M.B.; Velten, A. Study of single photon counting for non-line-of-sight vision. In Proceedings of the Conference on Advanced Photon Counting Techniques IX, Baltimore, MD, USA, 22–23 April 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9492. [Google Scholar] [CrossRef]
  5. Chan, S.; Warburton, R.E.; Gariepy, G.; Leach, J.; Faccio, D. Non-line-of-sight tracking of people at long range. Opt. Express 2017, 25, 10109–10117. [Google Scholar] [CrossRef]
  6. Gariepy, G.; Tonolini, F.; Henderson, R.; Leach, J.; Faccio, D. Detection and tracking of moving objects hidden from view. Nat. Photonics 2016, 10, 23–26. [Google Scholar] [CrossRef]
  7. Laurenzis, M.; Christnacher, F.; Velten, A. Study of a dual mode SWIR active imaging system for direct imaging and non-line of sight vision. In Proceedings of the Conference on Laser Radar Technology and Applications XX and Atmospheric Propagation XII, Baltimore, MD, USA, 20–24 April 2015; SPIE: Bellingham, WA, USA, 2015; Volume 9465. [Google Scholar] [CrossRef]
  8. Brooks, J.; Faccio, D. A single-shot non-line-of-sight range-finder. Sensors 2019, 19, 4820. [Google Scholar] [CrossRef]
  9. Laurenzis, M.; Klein, J.; Christnacher, F. Transient light imaging laser radar with advanced sensing capabilities: Reconstruction of arbitrary light in flight path and sensing around a corner. In Proceedings of the Conference on Laser Radar Technology and Applications XXI, Anaheim, CA, USA, 9–13 April 2017; SPIE: Bellingham, WA, USA, 2017; Volume 10191. [Google Scholar] [CrossRef]
  10. Laurenzis, M.; La Manna, M.; Buttafava, M.; Tosi, A.; Nam, J.H.; Gupta, M.; Velten, A. Advanced Active Imaging with Single Photon Avalanche Diodes. In Proceedings of the Conference on Emerging Imaging and Sensing Technologies for Security and Defence III; and Unmanned Sensors, Systems, and Countermeasures, Berlin, Germany, 10–13 September 2018; SPIE: Bellingham, WA, USA, 2018; Volume 10799. [Google Scholar] [CrossRef]
  11. Zhao, J.X.; Gramuglia, F.; Keshavarzian, P.; Toh, E.H.; Tng, M.; Lim, L.; Dhulla, V.; Quek, E.; Lee, M.J.; Charbon, E. A Gradient-Gated SPAD Array for Non-Line-of-Sight Imaging. IEEE J. Sel. Top. Quantum Electron. 2024, 30, 10. [Google Scholar] [CrossRef]
  12. Pediredla, A.; Dave, A.; Veeraraghavan, A.; IEEE. SNLOS: Non-line-of-sight Scanning through Temporal Focusing. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Tokyo, Japan, 15–17 May 2019; IEEE International Conference on Computational Photography: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  13. Callenberg, C.; Shi, Z.; Heide, F.; Hullin, M.B. Low-Cost SPAD Sensing for Non-Line-Of-Sight Tracking, Material Classification and Depth Imaging. Acm Trans. Graph. 2021, 40, 12. [Google Scholar] [CrossRef]
  14. Wu, J.; Yu, C.; Zeng, J.W.; Dai, C.; Xu, F.H.; Zhang, J. Miniaturized time-correlated single-photon counting module for time-of-flight non-line-of-sight imaging applications. Rev. Sci. Instrum. 2024, 95, 6. [Google Scholar] [CrossRef]
  15. Wu, C.; Liu, J.J.; Huang, X.; Li, Z.P.; Yu, C.; Ye, J.T.; Zhang, J.; Zhang, Q.; Dou, X.K.; Goyal, V.K.; et al. Non-line-of-sight imaging over 1.43 km. Proc. Natl. Acad. Sci. USA 2021, 118, e2024468118. [Google Scholar] [CrossRef]
  16. Huang, X.; Ye, R.L.; Li, W.W.; Zeng, J.W.; Lu, Y.C.; Hu, H.Q.; Zhou, Y.J.; Hou, L.; Li, Z.P.; Jiang, H.F.; et al. Non-Line-of-Sight Imaging and Vibrometry Using a Comb-Calibrated Coherent Sensor. Phys. Rev. Lett. 2024, 132, 6. [Google Scholar] [CrossRef]
  17. Cui, Y.R.; Trichopoulos, G.C. Seeing Around Obstacles Using Active Terahertz Imaging. IEEE Trans. Terahertz Sci. Technol. 2024, 14, 433–445. [Google Scholar] [CrossRef]
  18. Feng, Y.F.; Cui, X.Y.; Meng, Y.; Yin, X.J.; Zou, K.; Hao, Z.F.; Yang, J.Y.; Hu, X.L. Non-line-of-sight imaging at infrared wavelengths using a superconducting nanowire single-photon detector. Opt. Express 2023, 31, 42240–42254. [Google Scholar] [CrossRef]
  19. Wang, B.; Zheng, M.Y.; Han, J.J.; Huang, X.; Xie, X.P.; Xu, F.H.; Zhang, Q.; Pan, J.W. Non-Line-of-Sight Imaging with Picosecond Temporal Resolution. Phys. Rev. Lett. 2021, 127, 6. [Google Scholar] [CrossRef]
  20. Zhu, S.Y.; Sua, Y.M.; Rehain, P.; Huang, Y.P. Single photon imaging and sensing of highly obscured objects around the corner. Opt. Express 2021, 29, 40865–40877. [Google Scholar] [CrossRef]
  21. Wang, Z.W.; Li, X.Y.; Pu, M.B.; Chen, L.W.; Zhang, F.; Zhang, Q.; Zhao, Z.B.; Yang, L.F.; Guo, Y.H.; Luo, X.A. Vectorial-Optics-Enabled Multi-View Non-Line-Of-Sight Imaging with High Signal-To-Noise Ratio. Laser Photonics Rev. 2024, 18, 10. [Google Scholar] [CrossRef]
  22. O’Toole, M.; Lindell, D.B.; Wetzstein, G.; Assoc Comp, M. Confocal Non-line-of-sight Imaging. In Proceedings of the Special-Interest-Group-on-Computer-Graphics-and-Interactive-Techniques (SIGGRAPH) Conference, Vancouver, BC, Canada, 12–16 August 2018; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  23. O’Toole, M.; Lindell, D.B.; Wetzstein, G.; Assoc Comp, M. Real-time Non-line-of-sight Imaging. In Proceedings of the Special-Interest-Group-on-Computer-Graphics-and-Interactive-Techniques (SIGGRAPH) Conference, Vancouver, BC, Canada, 12–16 August 2018; Association for Computing Machinery: New York, NY, USA, 2018. [Google Scholar] [CrossRef]
  24. Tsai, C.Y.; Kutulakos, K.N.; Narasimhan, S.G.; Sankaranarayanan, A.C.; IEEE. The Geometry of First-Returning Photons for Non-Line-of-Sight Imaging. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE Conference on Computer Vision and Pattern Recognition: New York, NY, USA, 2017; pp. 2336–2344. [Google Scholar] [CrossRef]
  25. Heide, F.; O’Toole, M.; Zang, K.; Lindell, D.; Diamond, S.; Wetzstein, G. Non-line-of-sight Imaging with Partial Occluders and Surface Normals. Acm Trans. Graph. 2019, 38, 10. [Google Scholar] [CrossRef]
  26. Wang, D.J.; Hao, W.; Tian, Y.Y.; Xu, W.H.; Tian, Y.; Cheng, H.H.; Chen, S.M.; Zhang, N.; Zhu, W.H.; Su, X.Q. Enhancing the spatial resolution of time-of-flight based non-line-of-sight imaging via instrument response function deconvolution. Opt. Express 2024, 32, 12303–12317. [Google Scholar] [CrossRef]
  27. Liu, J.J.; Zhou, Y.J.; Huang, X.; Li, Z.P.; Xu, F.H. Photon-Efficient Non-Line-of-Sight Imaging. IEEE Trans. Comput. Imaging 2022, 8, 639–650. [Google Scholar] [CrossRef]
  28. Rapp, J.; Saunders, C.; Tachella, J.; Murray-Bruce, J.; Altmann, Y.; Tourneret, J.Y.; McLaughlin, S.; Dawson, R.M.A.; Wong, F.N.C.; Goyal, V.K. Seeing around corners with edge-resolved transient imaging. Nat. Commun. 2020, 11, 10. [Google Scholar] [CrossRef]
  29. Musarra, G.; Lyons, A.; Conca, E.; Altmann, Y.; Villa, F.; Zappa, F.; Padgett, M.J.; Faccio, D. Non-Line-of-Sight Three-Dimensional Imaging with a Single-Pixel Camera. Phys. Rev. Appl. 2019, 12, 6. [Google Scholar] [CrossRef]
  30. Liu, X.C.; Bauer, S.; Velten, A. Phasor field diffraction based reconstruction for fast non-line-of-sight imaging systems. Nat. Commun. 2020, 11, 13. [Google Scholar] [CrossRef] [PubMed]
  31. Nam, J.H.; Brandt, E.; Bauer, S.; Liu, X.C.; Renna, M.; Tosi, A.; Sifakis, E.; Velten, A. Low-latency time-of-flight non-line-of-sight imaging at 5 frames per second. Nat. Commun. 2021, 12, 10. [Google Scholar] [CrossRef]
  32. Pei, C.Q.; Zhang, A.K.; Deng, Y.; Xu, F.H.; Wu, J.M.; Li, D.U.L.; Qiao, H.; Fang, L.; Dai, Q.H. Dynamic non-line-of-sight imaging system based on the optimization of point spread functions. Opt. Express 2021, 29, 32349–32364. [Google Scholar] [CrossRef]
  33. Musarra, G.; Caramazza, P.; Turpin, A.; Lyons, A.; Higham, C.F.; Murray-Smith, R.; Faccio, D. Detection, identification, and tracking of objects hidden from view with neural networks. In Proceedings of the Conference on Advanced Photon Counting Techniques XIII, Baltimore, MD, USA, 14–18 April 2019; SPIE: Bellingham, WA, USA, 2019; Volume 10978. [Google Scholar] [CrossRef]
  34. Tu, M.; Yan, Q.R.; Zheng, Y.J.; Xiong, X.C.; Zou, Q.; Dai, Q.L.; Lu, X.Q. Poisson Noise Suppression for Single- Photon Non- Line-of- Sight Imaging Based on Deep Learning. Laser Optoelectron. Prog. 2023, 60, 8. [Google Scholar] [CrossRef]
  35. Buttafava, M.; Zeman, J.; Tosi, A.; Eliceiri, K.; Velten, A. Non-line-of-sight imaging using a time-gated single photon avalanche diode. Opt. Express 2015, 23, 20997–21011. [Google Scholar] [CrossRef]
  36. O’Toole, M.; Heide, F.; Lindell, D.B.; Zang, K.; Diamond, S.; Wetzstein, G. Reconstructing transient images from single-photon sensors. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1539–1547. [Google Scholar]
  37. Royo, D.; Garcia, J.; Luesia-Lahoz, P.; Marco, J.; Gutierrez, D.; Muñoz, A.; Jarabo, A. Non-Line-of-Sight Transient Rendering. In Proceedings of the SIGGRAPH Conference, Vancouver, BC, Canada, 7–11 August 2022; Association for Computing Machinery: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  38. Tan, J.J.; Su, X.Q.; Wang, K.D.; Wu, J.Y. Modeling and simulation analysis of a non-line-of-sight infrared laser imaging system. In Proceedings of the 4th International Conference on Photonics and Optical Engineering, Xi’an, China, 14–17 October 2020; SPIE: Bellingham, WA, USA, 2020; Volume 11761. [Google Scholar] [CrossRef]
  39. Tan, J.J.; Su, X.Q.; Wu, J.Y.; Wei, Z.Q. Simulation of NLOS (non-line-of-sight) 3D imaging system. In Proceedings of the Annual Conference of the Chinese-Society-for-Optical-Engineering (CSOE) on Applied Optics and Photonics China (AOPC)—Laser Components, Systems, and Applications, Beijing, China, 4–6 June 2017; SPIE: Bellingham, WA, USA, 2017; Volume 10457. [Google Scholar] [CrossRef]
  40. Zhu, W.H.; Tan, J.J.; Ma, C.W.; Su, X.Q. Simulation of non-line-of-sight imaging system based on the light-cone transform. In Proceedings of the 4th International Conference on Photonics and Optical Engineering, Xi’an, China, 14–17 October 2020; SPIE: Bellingham, WA, USA, 2021; Volume 11761. [Google Scholar] [CrossRef]
  41. Scholes, S.; Mora-Martín, G.; Zhu, F.; Gyongy, I.; Soan, P.; Leach, J. Fundamental limits to depth imaging with single-photon detector array sensors. Sci. Rep. 2023, 13, 176. [Google Scholar] [CrossRef] [PubMed]
  42. Velten, A.; Willwacher, T.; Gupta, O.; Veeraraghavan, A.; Bawendi, M.G.; Raskar, R. Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nat. Commun. 2012, 3, 745. [Google Scholar] [CrossRef]
  43. Velten, A.; Wu, D.; Jarabo, A.; Masia, B.; Barsi, C.; Joshi, C.; Lawson, E.; Bawendi, M.; Gutierrez, D.; Raskar, R. Femto-photography: Capturing and visualizing the propagation of light. ACM Trans. Graph. (ToG) 2013, 32, 1–8. [Google Scholar] [CrossRef]
  44. Henderson, R.K.; Johnston, N.; Chen, H.; Li, D.D.U.; Hungerford, G.; Hirsch, R.; McLoskey, D.; Yip, P.; Birch, D.J. A 192 × 128 time correlated single photon counting imager in 40nm CMOS technology. In Proceedings of the ESSCIRC 2018-IEEE 44th European Solid State Circuits Conference (ESSCIRC), Dresden, Germany, 3–6 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 54–57. [Google Scholar]
  45. Henderson, R.K.; Johnston, N.; Della Rocca, F.M.; Chen, H.; Li, D.D.U.; Hungerford, G.; Hirsch, R.; Mcloskey, D.; Yip, P.; Birch, D.J. A 192x128 Time Correlated SPAD Image Sensor in 40-nm CMOS Technology. IEEE J. Solid-State Circuits 2019, 54, 1907–1916. [Google Scholar] [CrossRef]
  46. Hutchings, S.W.; Johnston, N.; Gyongy, I.; Al Abbas, T.; Dutton, N.A.; Tyler, M.; Chan, S.; Leach, J.; Henderson, R.K. A reconfigurable 3-D-stacked SPAD imager with in-pixel histogramming for flash LIDAR or high-speed time-of-flight imaging. IEEE J. Solid-State Circuits 2019, 54, 2947–2956. [Google Scholar] [CrossRef]
  47. Gyongy, I.; Martin, G.M.; Turpin, A.; Ruget, A.; Halimi, A.; Henderson, R.; Leach, J. High-speed vision with a 3D-stacked SPAD image sensor. In Proceedings of the Advanced Photon Counting Techniques XV, Online Only, 12–17 April 2021; SPIE: Bellingham, WA, USA, 2021; Volume 11721, p. 1172105. [Google Scholar]
  48. Gyongy, I.; Erdogan, A.T.; Dutton, N.A.; Martín, G.M.; Gorman, A.; Mai, H.; Della Rocca, F.M.; Henderson, R.K. A Direct Time-of-flight Image Sensor with in-pixel Surface Detection and Dynamic Vision. IEEE J. Sel. Top. Quantum Electron. 2023, 30, 3800111. [Google Scholar]
  49. Taylor, G.G.; McCarthy, A.; Korzh, B.; Beyer, A.D.; Morozov, D.; Briggs, R.M.; Allmaras, J.P.; Bumble, B.; Shaw, M.D.; Hadfield, R.H.; et al. Long-range depth imaging with 13 ps temporal resolution using a superconducting nanowire single-photon detector. In Proceedings of the CLEO: Science and Innovations, Washington, DC, USA, 10–15 May 2020; Optical Society of America: Washington, DC, USA, 2020; p. SM2M-6. [Google Scholar]
  50. Nicodemus, F.E.; Richmond, J.C.; Hsia, J.J.; Ginsberg, I.W.; Limperis, T. Geometrical considerations and nomenclature for reflectance. NBS Monogr. 1992, 160, 4. [Google Scholar]
  51. Laurenzis, M.; Klein, J.; Bacher, E.; Metzger, N. Multiple-return single-photon counting of light in flight and sensing of non-line-of-sight objects at shortwave infrared wavelengths. Opt. Lett. 2015, 40, 4815–4818. [Google Scholar] [CrossRef]
  52. Caramazza, P.; Boccolini, A.; Buschek, D.; Hullin, M.; Higham, C.F.; Henderson, R.; Murray-Smith, R.; Faccio, D. Neural network identification of people hidden from view with a single-pixel, single-photon detector. Sci. Rep. 2018, 8, 11945. [Google Scholar] [CrossRef]
  53. Scholes, S.; Wade, E.; McCarthy, A.; Garcia, J.; Tobin, R.; Soan, P.; Buller, G.; Leach, J. A robust framework for modelling long range dToF SPAD Lidar performance. Soon 2024. [Google Scholar] [CrossRef]
Figure 1. The demonstration room used throughout this work: (a) The components of the room together with their sizes. The simulation relies on determining the attenuation and path length from the laser spot to the SPAD detector. This information is then combined with an Impulse Response Function (IRF) to create temporal distributions of photon detections. (b) The room within the Unreal Engine. Throughout this work, the nearest corridor wall has been removed from the graphics to aid visualization, but is accounted for in all calculations. Note the use of a left-handed coordinate system to ensure compatibility between the Unreal environment and the later data-processing steps.
Figure 2. The ray projection scheme from the laser’s incident point to a lookin point: (a) The incident laser pulse $I$ and one of its scattered rays $R_j$ are shown in red. Three scattering paths and observed points are shown in green, originating from a single lookin point. The red cross indicates an observed point that is in shadow. (b) The $\phi_1^r$ and $\theta_1^r$ angles, respectively. The $\theta_1^r$ angle spans a $2\pi$ range in the local $y_1, z_1$ plane. The $\phi_1^r$ angle spans the range $[0, \pi/2]$, defined relative to the surface normal $x_1$.
Figure 3. The distribution of the laser energy from the laser hit point throughout the room. The first panel shows the $1/R^2$ propagation loss, while the second panel shows the Lambertian scattering distribution. The third panel shows the product of the two loss mechanisms, illustrating which points in the room receive the largest fraction of direct laser illumination.
Figure 4. The implementation of the primary scattering paths. For each lookin point $(x_j, y_j, z_j)$, the local incident angles $(\theta_j^i, \phi_j^i)$ from the laser point are calculated. Additionally, the reflection angles $(\hat{\theta}_j^r, \hat{\phi}_j^r)$ to each observed point $(x_k^o, y_k^o, z_k^o \,|\, k \in [1, \tilde{N}])$ are calculated. The BRDF, abbreviated as “B”, for each lookin $B_j$ is calculated using the local angles. Some fraction of the observed points for each lookin fall within the FoV of the SPAD. The energy from this reduced set of observed points is scattered back to the sensor. This ‘final’ scattering is proportional to the BRDFs and $\Gamma$’s of the observed points within the FoV ($[B_k^o : B_n^o]$), as well as the path lengths back to the sensor ($R_k^s \,|\, k \in [1, n]$).
Figure 5. (a) The energy distribution for a lookin point on the nearest wall, shown as a gray triangle, with a Lambertian BRDF. (b) The energy distribution for a lookin point on the pillar, shown as a gray triangle, with a $\sin(\theta)$ BRDF. The green square illustrates the FoV of the SPAD. The observation points that fall within the green square are used to calculate the final bounce back to the SPAD. Left column: the BRDF energy distributions from their respective lookin points. Center column: the effect of the inverse-square scaling on the energy distributions. Right column: close-in views of the SPAD FoV. These panels also illustrate the relative scattering intensity of each observed point, including the final bounce calculation.
Figure 6. The implementation of secondary scattering paths. For each lookin point $(x_j, y_j, z_j)$, the local incident angles are expanded to a set $(\hat{\theta}_j^i, \hat{\phi}_j^i)$ corresponding to each illuminated (i.e., not in shadow) observed point $(x_k^o, y_k^o, z_k^o \,|\, k \in [1, \tilde{N}])$. Each of these observed points contributes a scattering component proportional to the BRDFs of the observed points ($B_k^o \,|\, k \in [1, \tilde{N}]$), their coefficients of reflection ($\Gamma_k^o \,|\, k \in [1, \tilde{N}]$), and the total path length ($R_k^o + R_{kj}^o \,|\, k \in [1, \tilde{N}]$) from the laser hit point to the lookin point.
Figure 7. The panels show the likelihood envelope function $L(t)$, plotted on a log scale, for a 10 × 10 × 60 × 60 and a 45 × 45 × 60 × 60 sampling configuration, respectively. The likelihood envelope function $L(t)$ is the uppermost solid black line. The color-coded lines illustrate the contributions to the total envelope of each feature of the room: specifically, the walls, roof, and floor (blue), the box (purple), the pillar (orange), and the secondary bounce (red).
Figure 8. The optimized secondary scattering mechanism. For each lookin point $(x_j, y_j, z_j)$, only the observed point with the largest contribution $(x_k^o, y_k^o, z_k^o)$ is considered.
Figure 9. The panels show the likelihood envelope function $L(t)$ for the optimized secondary scattering mechanism, plotted on a log scale, for a 10 × 10 × 60 × 60 and a 45 × 45 × 60 × 60 sampling configuration, respectively. The likelihood envelope function $L(t)$ is the uppermost solid black line. The color-coded lines illustrate the contributions to the total envelope of each feature of the room: specifically, the walls, roof, and floor (blue), the box (purple), the pillar (orange), and the secondary bounce (red).
