2.1. Background Theory
The ToF camera resembles a traditional digital camera, with a lens focusing light on an image sensor, which is a two-dimensional array of photosensitive pixels. However, unlike typical digital cameras, a ToF camera features an active light source to illuminate the scene. The camera captures reflected light from the scene, with each pixel in parallel calculating depth data to create a complete depth map. Light-emitting diodes (LEDs) are commonly used as the light source due to their rapid response time. Most commercial ToF sensors use Near-InfraRed (NIR) wavelengths, around 850 nm, which are invisible to the human eye and allow high reflectance across various materials without interfering with vision [5].
Pixel implementations for image sensors vary according to the operational principle discussed further in the next section. Most current ToF systems are analog, using photodetectors and capacitors to collect and store charge from light pulses before converting it to a digital signal. Fully digital ToF cameras are also under development, using Single Photon Avalanche Diodes (SPADs) that can detect individual photons. Digital ToF systems help reduce noise linked to analog signals and the conversion process [5]. SPADs are specialized photodetectors offering high sensitivity to low light. Operating in avalanche mode (biased above their breakdown voltage), SPADs amplify each detected photon into a cascade of charge carriers, creating a detectable current pulse. This amplification enables SPADs to sense extremely faint light levels, down to individual photons [6].
The timing precision of SPADs is critical for applications like ToF sensors and Light Detection and Ranging (LiDAR), where accurate distance calculations depend on detecting light pulses. By measuring the time delay between a photon's emission and its return after reflecting from an object, SPAD-based systems calculate distances with high temporal resolution. This capability is especially valuable in robotics, 3D scanning, and automotive sensing, where detailed depth information is crucial. However, SPADs have some limitations, such as afterpulsing, where a SPAD remains spuriously sensitive after detecting a photon, potentially causing false counts, and dead time, which limits the detection rate, especially in brightly lit or rapid-detection environments [5,6].
The fundamental operating principle of ToF cameras involves illuminating a scene with a modulated light source and measuring the light that reflects to the sensor. Since the speed of light is constant, the distance to the object from which the light was reflected can be determined by calculating the time difference between the emitted and returning light signals. ToF cameras use two primary illumination techniques: pulsed light and Continuous Wave (CW) modulation.
In pulsed light ToF, short bursts of light are emitted, and the time taken for the light to return is measured to calculate distance. In CW modulation, the light source emits a continuous, sinusoidally modulated wave, and the phase shift between the emitted and received light is used to compute the depth information.
In the pulsed method, depth measurement is straightforward. The illumination unit quickly switches on and off, creating short pulses of light. A timer is activated when the light pulse is emitted and stops when the reflected light is detected by the sensor. The distance, d, to the object is then calculated from the elapsed time, as described in [7]:

d = (1/2) · c · ∆t,  (1)

where ∆t is the round-trip time of the light pulse and c is the speed of light. However, the ambient illumination usually contains the same wavelengths as the light source of the ToF camera. The light captured by the camera consists of both emitted light and ambient light. This mixture can cause inaccuracies in calculating distances. To address this, a measurement is taken while the illumination unit is switched off, allowing the ambient background to be subtracted from the overall signal. This adjustment is managed by using the outgoing light signal as a control for the sensor detector.
Additionally, each short light pulse contains a relatively low amount of energy [6], and due to imperfections in the system components [8], the received signal is prone to noise. To improve the signal-to-noise ratio (SNR), multiple cycles of these pulses, often millions, are recorded over a specific period. The final depth information is derived from the average of these cycles. This recording period is known as the Integration Time (IT) [9].
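To make the averaging step concrete, here is a minimal Python sketch (our illustration with hypothetical jitter values, not the firmware of any particular sensor) that averages simulated round-trip times over many cycles and converts the mean delay to distance via Equation (1):

    import numpy as np

    C = 299_792_458.0  # speed of light (m/s)

    def pulsed_distance(mean_round_trip_s: float) -> float:
        """Distance from the averaged pulse round-trip time, Equation (1)."""
        return 0.5 * C * mean_round_trip_s

    # Simulate one integration time: a million noisy pulse cycles for an
    # object 2 m away (true round trip ~13.34 ns, 50 ps timing jitter).
    rng = np.random.default_rng(seed=0)
    true_dt = 2 * 2.0 / C
    cycles = true_dt + rng.normal(0.0, 50e-12, 1_000_000)
    print(f"estimated distance: {pulsed_distance(cycles.mean()):.4f} m")  # ~2.0000 m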
Figure 1 illustrates the concept of this pulsed modulation method. By using an integration time interval of ∆t and two sampling windows (C1 and C2) that are out of phase, the averaged distance can be calculated as [9]:

d = (1/2) · c · ∆t · Q2 / (Q1 + Q2),  (2)

where Q1 and Q2 are the accumulated electrical charges received over the integration time.
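As a minimal sketch of this two-window readout (hypothetical charge values; real pixels accumulate Q1 and Q2 in hardware), Equation (2) can be applied as follows:

    C = 299_792_458.0  # speed of light (m/s)

    def two_window_distance(q1: float, q2: float, pulse_width_s: float) -> float:
        """Pulsed ToF with two out-of-phase windows, Equation (2).

        Q1 integrates charge while the pulse is on, Q2 while it is off;
        the fraction of light spilling into Q2 encodes the delay."""
        return 0.5 * C * pulse_width_s * q2 / (q1 + q2)

    # Hypothetical accumulated charges (arbitrary units) for a 30 ns pulse:
    print(two_window_distance(q1=8000.0, q2=2000.0, pulse_width_s=30e-9))  # ~0.9 m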
The depth resolution achievable with the pulsed method is constrained by the speed of the camera's electronics. Based on Equation (1), achieving a depth resolution of 1 mm would require a light pulse lasting approximately 6.6 picoseconds. However, the rise and fall times, as well as the repetition rates of current LEDs and laser diodes, impose practical limitations on generating such short pulses [8]. Moreover, reaching these speeds in the receiver circuit is challenging with today's silicon-based technology, especially at room temperature [9].
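For concreteness, the pulse-width figure quoted above follows from solving Equation (1) for ∆t at d = 1 mm:

∆t = 2d / c = (2 × 0.001 m) / (3 × 10⁸ m/s) ≈ 6.7 ps,

which, allowing for the rounding of c, is the approximately 6.6 ps value cited.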
In the Continuous Wave (CW) method, instead of directly measuring the round-trip time of a light pulse, the phase difference between the sent and received signals is determined. In this approach, the light source is modulated by adjusting its input current, creating a waveform signal [8]. While various modulation shapes can be used, square or sinusoidal waves are the most common [10]. The CW modulation technique relaxes the requirements on the light source, enabling a finer depth resolution than is possible with pulsed light.
There are multiple ways to demodulate the received signal and extract its amplitude and phase information. A traditional approach involves calculating the cross-correlation function between the original modulation signal and the returned signal [10]. This cross-correlation can be obtained by measuring the returned signal at specific phases, which can be implemented using mixers and low-pass filters in the detector. A more efficient alternative is synchronous sampling, where the modulated returned light is sampled simultaneously with a reference signal at four different phases (0°, 90°, 180°, and 270°), as shown in Figure 2 [10]. This synchronous sampling approach simplifies the circuit design and reduces pixel size, allowing for more pixels on a sensor and thus higher resolution.
As in the pulsed method, multiple samples are recorded and averaged to improve the signal-to-noise ratio (SNR). By using four equally spaced sampling windows (Q1 to Q4), timed by the reference signal (see Figure 2), the received signal is sampled at different phases over the integration time. Assuming a sinusoidal modulation signal with no harmonic frequencies, Discrete Fourier Transform (DFT) equations can be applied to calculate the phase ϕ, amplitude A, and offset B as follows [9,10]:

ϕ = arctan((Q4 − Q2) / (Q1 − Q3)),  (3)

A = (1/2) · √((Q1 − Q3)² + (Q4 − Q2)²),  (4)

B = (Q1 + Q2 + Q3 + Q4) / 4,  (5)

where Q1, Q2, Q3, and Q4 are the samples taken at 0°, 90°, 180°, and 270°, respectively.
From the phase, the distance can finally be calculated as [9]:

d = (c / (4π · f)) · ϕ,  (6)

where f is the modulation frequency.
The intensity, i.e., amplitude A, of the light decreases with the traveled distance in a known way. Hence, the received amplitude value from (4) can be used as a confidence measure for the distance measurement. Additionally, the reflected signal is often superimposed with background illumination, which causes errors in the measurement. Thus, the offset (5) is used to distinguish the modulated light component from the background light [10].
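A minimal Python sketch of this four-phase demodulation (our illustration with hypothetical sample values; in a real sensor these steps run per pixel in hardware) applies Equations (3)-(6) directly:

    import math

    C = 299_792_458.0  # speed of light (m/s)

    def demodulate(q1, q2, q3, q4, mod_freq_hz):
        """Four-phase CW demodulation, Equations (3)-(6).

        q1..q4 are samples at 0°, 90°, 180°, and 270° of the reference signal."""
        phase = math.atan2(q4 - q2, q1 - q3) % (2 * math.pi)  # Eq. (3), in [0, 2π)
        amplitude = 0.5 * math.hypot(q1 - q3, q4 - q2)        # Eq. (4), confidence
        offset = (q1 + q2 + q3 + q4) / 4.0                    # Eq. (5), background
        distance = C * phase / (4.0 * math.pi * mod_freq_hz)  # Eq. (6)
        return distance, amplitude, offset

    # Hypothetical samples for a 20 MHz modulation frequency:
    d, a, b = demodulate(q1=1200.0, q2=700.0, q3=800.0, q4=1300.0, mod_freq_hz=20e6)
    print(f"d = {d:.3f} m, amplitude = {a:.1f}, offset = {b:.1f}")

Using atan2 rather than a plain arctangent of the ratio preserves the correct quadrant of ϕ across the full [0, 2π) range.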
When calculating distances from the phase difference as in (6), one important limitation must be considered. Since the modulation signal is periodic, its phase wraps around every 2π radians, so the measured distance is only unambiguous up to d = c / (2f). For example, at a modulation frequency of 20 MHz the ambiguity distance is 7.5 m, and an object at 9 m would be reported as being at 1.5 m.
2.2. Relevant Research and Studies
The authors of [12] propose an IoT-based platform addressing urban waste collection optimization. The objective is to minimize the number of trips made by collection vehicles while prioritizing the removal of toxic waste, using algorithms based on knapsack methodologies. The model was implemented in MATLAB 24.1 and yielded a 47% improvement in the collection of highly toxic waste compared with conventional procedures. Waste collection points are established across the city and outfitted with IoT sensors (ultrasonic and gas sensors by default) that monitor the fill level and toxicity state of each container. The containers are classified into three groups according to toxicity: high, medium, and normal. The authors employ the 0/1 knapsack algorithm to optimize the loading of collection trucks so that capacity is maximized, with priority given to highly toxic waste. The model was evaluated against three traditional methods: First Bin First (FBF), collection based on location; Largest Bin First (LBF), fullest containers prioritized; and Longest Delay (LD), longest-waiting containers prioritized. In a simulation environment, the authors reported a 47% improvement over traditional methods in highly toxic waste collection; fewer trips per collection truck; reduced operating costs and fuel consumption; faster disposal of toxic materials through prioritized waste collection; and a cost-benefit analysis claiming that the system could recover the original investment in sensors in less than one year.
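The following Python fragment is our own illustrative reconstruction of such a 0/1 knapsack loading step (hypothetical volumes and priority scores; not the authors' MATLAB implementation):

    def load_truck(containers, capacity):
        """0/1 knapsack: pick containers maximizing a toxicity-weighted
        priority without exceeding the truck capacity.
        containers = [(volume, priority), ...]; capacity in the same units."""
        n = len(containers)
        best = [0.0] * (capacity + 1)   # best[v] = max priority within volume v
        taken = [[False] * (capacity + 1) for _ in range(n)]
        for i, (vol, prio) in enumerate(containers):
            for v in range(capacity, vol - 1, -1):  # descending: each used once
                if best[v - vol] + prio > best[v]:
                    best[v] = best[v - vol] + prio
                    taken[i][v] = True
        picked, v = [], capacity        # backtrack the chosen set
        for i in range(n - 1, -1, -1):
            if taken[i][v]:
                picked.append(i)
                v -= containers[i][0]
        return picked[::-1], best[capacity]

    # Hypothetical containers (volume in litres, priority = fill level x toxicity weight):
    bins = [(300, 90.0), (500, 60.0), (400, 120.0), (200, 30.0)]
    print(load_truck(bins, capacity=900))  # -> ([0, 2, 3], 240.0)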
Another paper [13] examines the cyber and physical threats affecting Smart Waste Management Systems (SWMSs). The authors note that SWMS security efforts have largely focused on securing communications while ignoring vulnerabilities in physical components and operational infrastructure, and they introduce a holistic security model aimed at countering both cyber and physical attacks. The methodology includes surveying existing SWMS implementations, their components, architectures, and protocols. Using the STRIDE model (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege) to identify vulnerabilities in the various layers of SWMSs, the authors assess possible attacks and their implications, covering sensor and communication failures, storage infrastructure, and surveillance. The analysis showed that SWMSs have adopted promising technologies, from fill-level measurement (ultrasonic sensors) to gas and temperature detection devices. The communication networks compared are Long Range Wide Area Network (LoRaWAN), ZigBee, Wi-Fi, and GSM, each with its own vulnerabilities. Anomalies in the containers are detected through surveillance cameras, and all data are managed in the cloud, enabling remote access and predictive analysis. Major findings include vulnerabilities in IoT sensors that allow attacks such as spoofing data and falsifying container status; possible communication failures, including Denial of Service (DoS) attacks that can disrupt data collection and SWMS operation; threats to user privacy, where tracking and profiling could be carried out based on the data collected from smart containers; weaknesses in actuator protection that enable attacks such as blocking the opening of containers or tampering with waste compactors; and security recommendations including strong encryption, strong authentication, and physical protection against device tampering.
Another study [14] reviews the influence of robotics on intelligent urban waste management, focusing on automation in collection, sorting, recycling, and disposal. The paper reviews existing and emerging technological solutions while examining the integration of robotics, IoT sensors, and AI to enhance operational effectiveness and reduce environmental impacts. Applications such as robotic compactors, autonomous recycling vehicles, and drone monitoring systems are considered. The reviewed systems use robotic technologies for automatic waste collection, including autonomous vehicles integrated with computer vision and AI, ultrasound sensors for monitoring container fill levels and categorizing waste, and robotic compactors to maximize stored waste volume. Drones are used in waste monitoring to detect illegal dumping and optimize collection routes. The study concludes that integrating robotics into urban waste management enhances operational efficiency, decreases operational costs, and reduces human exposure to hazardous environments. AI helps optimize collection routes and improve the separation of recyclable material, thus enhancing recycling rates and minimizing waste quantities sent to landfills. Nonetheless, further studies are required to assess the technology's economic viability and its potential long-term environmental impact.
The systematic review presented in article [15] deals with smart containers implemented for sustainability in urban waste management. The detection and actuation technologies used in those systems were analyzed for their waste segregation capability. The work follows the Systematic Literature Review (SLR) protocol based on the PRISMA methodology to ensure transparency and replicability. The process includes three main stages: definition of the need for the study, formulation of the research questions, and selection of the IEEE Xplore, Scopus, and ACM databases; application of inclusion/exclusion criteria and data extraction using reference-management software; and narrative and thematic analysis of the extracted data.
The study identifies the technologies used in smart containers. The most common sensors are: filling level (84%), predominantly ultrasonic detectors; gas detection (18%), monitoring gases including CO2 and methane; environmental sensors (19%), measuring temperature, humidity, and pressure; weight sensors (15%), using load cells to assess the weight of waste; computer vision (23%), using neural network algorithms to classify waste; and RFID (3%), identifying the waste or users.
Regarding actuators, the report identifies lid control (28%); automatic motorized mechanisms for contactless waste routing (34%); and automatic rotary and gravity mechanisms for separation and waste compaction (6%) to increase storage capacity. The study found, however, that very few detection mechanisms for smart containers are widely adopted. Waste classification is still focused on computer vision, with little exploration of composition-identification methods, and most automatic waste separation solutions remain basic and lack standardization. The report recommends conducting market studies and cost-benefit analyses to deploy new high-end sensors for sorting and monitoring at the container level. This finding reinforces the need for, and importance of, our work with the specified ToF sensor.
Article [16] provides a comparative analysis of ToF and LiDAR sensors specifically for indoor mapping applications. The primary objective of the study is to assess the accuracy, efficiency, and suitability of ToF sensors in relation to conventional LiDAR systems for indoor use. The research methodology involved conducting a series of experimental tests within indoor settings, utilizing both sensor types to gather spatial data. Key parameters evaluated included the accuracy of distance measurements, spatial resolution, response time, and the capability to detect objects under varying environmental conditions. Findings indicated that LiDAR sensors deliver superior accuracy in distance measurement, with an average error margin of approximately 2 cm, whereas ToF sensors exhibited an average error of about 5 cm. Nonetheless, ToF sensors were noted for their cost-effectiveness and lower energy consumption, rendering them a viable option for applications where utmost precision is not essential. Furthermore, ToF sensors proved to be more effective in identifying objects in highly reflective environments, a scenario where LiDAR systems may encounter challenges due to signal saturation. In conclusion, the decision to utilize either ToF or LiDAR sensors for indoor mapping should consider the necessary accuracy, associated costs, and the specific characteristics of the application environment.
Table 1 summarizes the relevant research discussed previously.
In this study, we aim to demonstrate that while some of these solutions may offer accuracy comparable to our ToF-Node (e.g., LiDAR) or similar features and optimization capabilities (e.g., RFID or IoT-based systems), our system provides a balanced combination of accuracy, cost-effectiveness, and easy implementation. This makes ToF sensors a strong contender for waste bin monitoring applications.
Based on the related works analyzed, we conclude that, while LiDAR offers the highest accuracy and range, it is more expensive and complex, making it less suitable for this type of application. RFID and IoT-based systems provide advanced features and optimization capabilities (like our system) but tend to be more costly and complex to implement. ToF sensors, being inherently low-power (without the need for complex algorithms to achieve this), offer a practical and efficient solution for monitoring trash bins. Their high accuracy ensures precise fill-level measurements, and their low power consumption makes them more autonomous, which is essential for this application, especially in scenarios where cost and simplicity are key considerations.
Our research contributes to the field in several significant ways that are not covered by the works referenced previously, namely:
Precision and Accuracy
Three-Dimensional Visualization
Future Potential with Blockchain
Cost-Effectiveness and Scalability
Real-Time Monitoring and Reliability
In Section 4.4, we will further elaborate on this topic.