1. Introduction
Monitoring a robot’s environment from the surface of the robot has been gaining traction in recent years. This offers insight into the robot’s surroundings for better motion planning in unstructured environments, as well as improved workplace safety afforded by more nuanced knowledge of the robot’s surroundings. Traditionally, robots were enclosed in work cells with physical barriers. Whenever a human operator entered the work cell, the robots inside it would cease operation until the operator vacated the area. Although this approach was inherently safe, it was neither space-efficient nor did it allow for any kind of human–robot collaboration (HRC). The first problem was addressed with linear lidar scanners: based on their readings, only the robots in the proximity of an operator would slow down or stop. While a big step towards human–robot collaboration, the approach with lidar scanners still leaves a lot to be desired. For true HRC, robots and human operators have to be able to move safely even when they are in close proximity, which the methods mentioned above do not allow. A solution that has found its way into the industrial environment is to equip a robot with force sensors. This way, the robot can stop on collision before harming the human operator. Such robots are known as collaborative robots. Although they enable HRC, other techniques are being investigated for their benefits. In the following paragraphs, we delve into specific approaches, such as monitoring the robot’s surroundings with depth cameras or on-robot time-of-flight sensors, to highlight their potential in achieving nuanced speed and separation monitoring for enhanced human–robot collaboration.
Preventing robots and operators from moving in close proximity is enabled by speed and separation monitoring (SSM) [
1]. A more nuanced SSM, where the actual distance between the robot and its surroundings is known, allows for motion planning and obstacle avoidance. One way to achieve this is to observe the work cell with depth cameras [
2,
3], but this approach is prone to missing details due to view occlusions, a problem alleviated by using multiple depth cameras observing the same scene from multiple perspectives [
4,
5]. Alternatively, the robot’s surroundings can be observed from the robots’ surface by mounting time-of-flight (ToF) sensors onto the robot. Additional benefits are derived from combining such an approach with previously mentioned stationary depth cameras [
6,
7]. An approach that is gaining traction is to omit the expensive depth cameras and monitor the robot’s surroundings with individual cheaper depth sensors that are distributed across the robot’s surface [
8,
9]. This approach suffers from the inability to detect objects in very close proximity to the sensors and, as a result, has to be augmented with additional close proximity sensors, such as capacitive, tactile sensors [
10,
11]. Because sensors with overlapping monitored areas may interfere, the sensor position and quantity have to be considered to minimize blind spots while also maximizing the measurement rate [
12]. Readings from on-robot depth sensors may be used for implementing a more nuanced SSM, or even to aid in obstacle avoidance and path planning [
13,
14]. Another thing to consider is self-detection—a situation where the ToF sensor detects a segment of the robot on which it is mounted. This problem can be solved by either simulating the expected measurements in an empty room or by calculating the expected values [
15]. In either case, the exact mounting positions of the sensors have to be known. An alternative to precise mounting or measurement may be to integrate inertial measurement units into the sensor boards and execute a calibration procedure that locates the sensor boards based on the robot’s movement [
16]. All the references cited so far have had the observed object in their direct line of sight.
Depth cameras and ToF sensors have been demonstrated to work using a mirror to redirect light. Examples include using stationary mirrors to simulate observing an object with multiple 3D cameras [
17] and using mirrors, mounted onto a robot, to expand the field of view (FOV) of a scanning lidar [
18,
19,
20]. Furthermore, we explored the effect of mirrors on the measurement accuracy and precision in our last article [
21]. With this information, a method of distributing FOVs using mirrors can be proposed.
Monitoring the robot’s surroundings from multiple points of view scattered across the robot’s surface offers unique benefits when compared to monitoring the robot and its surroundings from an external point of view. Most importantly, this approach provides information about the surroundings from a plethora of perspectives, making the system less susceptible to missing details when a small number of sensors become obstructed or fail. Although a number of previously mentioned studies [
8,
9,
10,
11,
12] have tackled the problem in such a way, none have achieved the measurement throughput needed for use in an actual safety application. Using a centralized lidar system that can interleave measurements on individual channels for a higher measurement throughput has been identified as a possible solution. Furthermore, a way of distributing the FOVs of a centralized multi-channel lidar using stationary mirrors to mimic the effect of mounting discrete ToF sensors across the robot’s segments is proposed. As a step towards the actual implementation of such technology, this article presents the modular lidar system that was developed for the purpose of investigating the possible problems that may arise when multiple nearby FOVs are redirected using mirrors, and the results of that investigation.
Beyond
Section 1, the introduction, the paper is organized as follows:
Section 2 discusses the underlying principles, measurement equipment, and the setup of the lidar system developed for the experiments, providing comprehensive details of its design. In
Section 3, we present the experimental results, conveyed through the demonstration of various measurement configurations. The interpretation of these results is expounded in
Section 4. Finally,
Section 5 concludes the paper and suggests potential avenues for future research.
3. Results
This section contains the evaluation of the results for the modular lidar devices, first with a direct beam of light on the obstacle to be measured, followed by configurations with a light beam passing over one fixed mirror, and then combinations with two and three channels. An illustration of the light paths, based on a raytracing optical simulation, is provided at the end. For clarity, lidar channels one, two, and three correspond to the first, second, and third daughter boards of the main board or, based on their position, to the left, middle, and right boards, as shown in
Figure 9a. The direct light beam configuration is shown with and without the walk error compensation. The configurations where the light was first redirected with a mirror include mirrors present on channels one, two, and three individually, then on all three channels simultaneously, and on pairs of channels one and two, two and three, and one and three. The results are presented in the form of graphs of the average measurement error at each specified distance between the lidar system and the fixed mirror(s).
Although the measurement results were examined for different configurations, the resulting values were consistently found to have a standard deviation of about 12 mm over the entire calibrated measurement range, occasionally reaching up to 15 mm beyond it; possible algorithmic improvements are discussed in
Section 4. Measuring the direct distance to the target without walk error compensation results in an increasing positive error. This is shown in
Figure 10a, where the average uncompensated measurement error is plotted against the set distance. The blue, orange, and red traces correspond to channels one, two, and three, respectively, and all the axes are in mm. Using walk error compensation, as presented in the Methods section, lowers the measurement error to that shown in
Figure 10b, where the average compensated measurement error is plotted against the set distance with solid lines. As in
Figure 10a, the blue, orange, and red traces correspond to channels one, two, and three, respectively, and both axes are in mm. The standard deviation of the compensated measurements was also calculated and is shown in
Figure 10c. The solid lines represent the raw standard deviation of the measurements with walk error compensation, and the dashed lines represent the standard deviation of the measurements with walk error compensation after further filtering with the running average of eight samples. The standard deviation is plotted against the set distance. Again, the blue, orange, and red curves correspond to channels one, two, and three, respectively; both axes are in mm.
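As a minimal sketch of the filtering just described: under an idealized white-noise assumption, a running average over N samples reduces the standard deviation by roughly a factor of √N, so an 8-sample window should bring a 12 mm raw standard deviation down to roughly 4 mm. The noise level and window size follow the text; the waveform below is synthetic.

```python
import numpy as np

def running_average(samples: np.ndarray, window: int = 8) -> np.ndarray:
    """Simple moving average over the last `window` samples."""
    kernel = np.ones(window) / window
    return np.convolve(samples, kernel, mode="valid")

# Synthetic ranging data: a constant distance plus ~12 mm white noise,
# mimicking the raw standard deviation reported in the text.
rng = np.random.default_rng(0)
raw = 1000.0 + rng.normal(0.0, 12.0, size=100_000)
filtered = running_average(raw, window=8)

raw_std = raw.std()        # close to the simulated 12 mm
filt_std = filtered.std()  # close to 12 / sqrt(8), i.e. roughly 4 mm
```

This √N scaling only holds for uncorrelated noise; systematic errors such as the multipath effects discussed later are not reduced by averaging.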
Experiments with ranging through mirrors were performed in various configurations. All combinations from one to three mirrors were tested, and the results are shown in the following diagrams. In the case of the mirror measurements, it should be noted that there are two distances involved: one from the lidar system to the mirror, and one from the mirror to the target object. The two distances combined are referred to as the set distance. Throughout the upcoming figures, each contiguous trace represents measurements from a setup where the distance between the lidar and the mirror was constant, and each plot contains multiple traces at different distances between the lidar system and the mirror, as described in more detail in
Section 2.3. The traces are labeled by their starting distance, which is the minimal set distance in each setup. In
Figure 11,
Figure 12,
Figure 13 and
Figure 14, the average measurement error of the walk error-compensated measurements, in mm, is plotted against the set distance, also in mm. Both the vertical and horizontal axes are kept constant throughout the figures for easier comparison. To enhance clarity, each graph is labeled in the top left corner. The digits to the left of the hyphen indicate which mirrors were present during the measurement, while the digit on the right corresponds to the lidar channel to which the plotted measurement errors belong.
The ranging performance with each channel measured through a mirror with only one mirror present is shown in
Figure 11 in the left column. Diagrams (a), (c), and (e) refer to the first, second, and third channel, respectively. In this setup, only one of the three mirrors was present at a time, and it redirected only the light from the corresponding channel. This serves as a reference for what measurements through a mirror should look like without any interference. The right column of
Figure 11 shows the average measurement error for the setup with all three mirrors present. Plots (b), (d), and (f) correspond to the first, second, and third channel, respectively. The plots correspond to the data where the transmitter and receiver were on the same daughter board. It can be seen that the measurement error for the setups with only one mirror was relatively constant and no significant patterns can be observed, apart from the excessive measurement error outside of the calibration range, as at the beginning in
Figure 11e, and less notably, in
Figure 11c. The setups with multiple mirrors, however, show notable patterns. The measurement error trends towards negative on channel one, makes positive humps on channel three, and shows a combination of the two effects on channel two.
To isolate the effect of the neighboring mirrors, another experiment was conducted with only two mirrors: one redirecting the transmitted light on channel one, and the other on channel two. The results are shown in
Figure 12. The average measurement error of the walk error-compensated measurements is plotted against the set distance. The different colors represent the different starting distances between the lidar and the mirrors, as described above. The average measurement error on channel one shows a negative trend, and positive humps are present on channel two. The humps appear gradually and disappear quickly.
Figure 12a,b show data for channels one and two, respectively.
Another pair of mirrors tested was a configuration with only the second and third mirror. This configuration is similar to the previous one; therefore, the results were expected to show the same trends. That is confirmed by
Figure 13, which, just as before, shows the average measurement error as a function of the set distance at various starting distances.
Figure 13a shows the data for channel two and
Figure 13b shows the data for channel three.
There is a strong similarity between the setups with mirrors present only on channels one and two, and on channels two and three. This is confirmed by comparing
Figure 12 and
Figure 13. They illustrate the interference contribution from the mirror directly to the left or right of the monitored channel. To determine the effect of a mirror one position away from the monitored channel, a setup with mirrors present only on channels one and three was tested as well. The measurement errors for this setup on channels one and three are shown in
Figure 14a,b, respectively. Again, the average measurement error is plotted against the set distance; the different traces correspond to the different minimum distances between the lidar and the mirrors. The results are very similar to those from the tests with only one mirror present.
Some conclusions could be drawn simply by analyzing the obtained results in
Figure 10,
Figure 11,
Figure 12,
Figure 13 and
Figure 14, but a more reliable explanation can be obtained by also considering a raytracing simulation [
28]. A two-dimensional top-down simulation with a simplified setup with three mirrors was used. The receiver was placed where the photodiode of the middle lidar board would be, and a point light source was positioned where the target would be illuminated. This source was moved along the axis that the target was moved in the physical experiments. An illustration of the light paths from the target (vertical black line) at different distances is shown in
Figure 15. It can be seen that, depending on the set distance (a–g), the reflected light is coupled to the receiver through different light paths. The light path through the central mirror, through which the light pulse is transmitted, is always present, but the light paths through the left and right mirrors are only present at some set distances.
In the simulation, when the target is very close to the mirrors (a), only one possible light path exists, and that is straight back through the same mirror that the light was sent through initially. With the target a little farther (b), the light can also be coupled through the mirror on the right. The reflection initially appears at the farther edge of the mirror and slowly moves towards the closer one as the target’s distance increases. At some point (f), only a partial reflection can be obtained through the mirror on the right; therefore, the contribution of this light path starts to decrease before fully disappearing (g). This light path is present over a wide range of target positions.
The light path through the mirror on the left is present over a narrower range. Before it is partially established (c), the left mirror is occluded by the front side of the middle one. With the distance to the target increasing, this light path starts getting obstructed by the back of the mirror substrate (e) before becoming completely cut off a little farther away (f). Both the slow onset of this light path and its disappearance can be seen in
Figure 11,
Figure 12 and
Figure 13. It has to be noted that the exact distances when the different light paths become established or obstructed depend greatly on the exact mirror dimensions, spacing, laser beam size, receiving lens diameter, and the receiver active area, as well as the distance between the lidar and mirrors.
4. Discussion
In this section, the measurement results are summarized, and an explanation is provided for the patterns observed in the measurement error plots. The explanation is supported by evidence from the light path simulations. Before delving into these details, a comparison is made between the lidar’s ranging performance and that of the commercially available alternatives.
In direct ranging, and even when ranging through one clean mirror, the presented lidar’s ranging measurement error is safely within the ±1 cm range, and the measurement’s standard deviation was measured to be around 12 mm, as long as the set distance is within the calibrated range. This is achieved with nothing but walk error correction. The theoretical maximum throughput of the developed lidar ranging system is 10 kHz, split among all channels, though it was limited to 500 Hz by the current implementation. Using a faster microcontroller and better-optimized code would allow for the use of the full sample rate, where averaging multiple samples or utilizing the running average would be very feasible. A running average of four samples effectively halves the measurement’s standard deviation, while using eight samples drops it safely below 4 mm, as seen in
Figure 10c. According to an independent evaluation of Microsoft Kinect 2.0 [
29], our lidar’s accuracy is comparable to that of the Microsoft Kinect 2.0, but the Kinect has better precision. Since our design allows for a much higher sample rate, the precision could be matched by utilizing the running average. Based on the intended use, the presented lidar is more comparable to the VL53L1X lidar sensor from STMicroelectronics (STM), which is often used for monitoring the robot’s surroundings from the surface of the robot. Both have a similar measurement error, but the VL53L1X has slightly lower measurement noise [
30]. The latter could, once again, be significantly improved by utilizing the running average, which would make our sensor better than STM’s offering.
In an ideal case, where both the field of view and the field of illumination were infinitesimally narrow and perfectly overlapped on each channel, the measurement results for the configurations with one or more mirrors would be indistinguishable from one another. Such a light path is, however, expensive to manufacture and demands a bulky optics setup. As such, differences between the configurations with one or more mirrors may be observed in some conditions. The errors to be expected in various configurations depend on the ToF measurement technology. The device developed for the experiments in this article is a pulsed lidar with a 50 ns pulse width and time-over-threshold walk error compensation. Due to its wide pulse, it cannot discern individual reflections from separate light paths or off individual reflecting surfaces, such as the target and grime or dust on the mirrors, as the individual reflections would overlap. The mirrors were thoroughly cleaned before collecting the measurements; therefore, reflections off dirt and grime are not present, as confirmed by the results for ranging with only one mirror. From this observation, it is clear that multi-path reflection is the main contributor to the measurement error.
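The time-over-threshold compensation named above can be illustrated on a simulated return pulse: a weaker pulse crosses the threshold later (the walk), but its time over the threshold shrinks accordingly, so the measured width can be mapped back to a correction. The Gaussian pulse shape and the symmetric-centre correction below are simplifying assumptions for illustration; the actual device presumably uses a calibrated width-to-correction mapping.

```python
import math

def gaussian_crossings(amplitude, t0, sigma, threshold):
    """Leading/trailing threshold-crossing times of a Gaussian return pulse."""
    if amplitude <= threshold:
        return None  # pulse never crosses the threshold
    half = sigma * math.sqrt(2.0 * math.log(amplitude / threshold))
    return t0 - half, t0 + half

def compensated_time(t_lead, t_trail):
    """For a symmetric pulse, t_lead + width/2 is amplitude-independent."""
    return t_lead + (t_trail - t_lead) / 2.0

# Same arrival time, different amplitudes (e.g. a dark vs. bright target).
strong = gaussian_crossings(1.0, 100.0, 10.0, 0.2)
weak = gaussian_crossings(0.3, 100.0, 10.0, 0.2)

t_strong = compensated_time(*strong)  # both recover the true centre, 100
t_weak = compensated_time(*weak)
```

The leading edges of the two pulses differ by several time units (the uncompensated walk error), while the width-corrected times coincide, which is the property the compensation exploits.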
In
Figure 11,
Figure 12,
Figure 13 and
Figure 14, certain patterns can be observed. Firstly, the mirrors only affect the ranging performance when the lidar channels are physically close to one another. Secondly, ranging is slightly affected by the mirror on the right in the form of the measurements showing the target closer than the actual distance. Thirdly, the mirror on the left notably affects the channel on the right.
Based on the measurement results and the simulations, illustrated in
Figure 15, the effect of the left and right mirrors can be summarized. The right mirror contributes to an increasingly negative measurement error. Light has to travel a shorter path through the right mirror, but it enters the receiving optics at a greater angle. This results in a low-amplitude and slightly early reflection. Combined with the main reflection, the resulting pulse is slightly early and slightly wider than it should be. The earliness causes a negative measurement error, while pulse widening causes the undercompensation of the walk error and therefore a positive measurement error. The two effects oppose each other but clearly do not cancel out. With increasing distances between the lidar and mirror, and the mirror and target, the incidence angle of the light reflecting through the mirror on the right decreases. This increases the light gain and the total amount of light that hits the receiver; thus, the error this light path brings increases as well. According to the results presented in
Figure 11b,d,
Figure 12a and
Figure 13a, the effects of the mirror on the right become notable when the target is 80 cm from the lidar, measuring through the primary light path. The effects remain present throughout the rest of the measurement range.
The mirror on the left interferes with the measurements in a slightly different way. As seen in
Figure 15, the undesired light path through the neighboring mirror is longer than the primary path, making pulse widening its primary effect. This results in walk error undercompensation, which makes the measurements read farther than the actual distance. At least in some lidar and mirror configurations, this effect is only present over a narrow range. The error slowly increases in magnitude before possibly reaching a plateau and then quickly disappearing. Judging by the simulations, this error should disappear completely at some distance, even though in some of our measurements it did not. This can be attributed to the limited measurement range. With an increasing distance, the incidence angle decreases, slowly increasing both the gain and the total light coupling into the receiver. At a certain angle, however, the reflected light starts being occluded by the back side of the primary mirror. Because the illuminating beam is very narrow, the distance between the reflection being slightly and fully occluded is rather small, which explains the sharp drop in the measurement error.
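This widening mechanism can be sketched numerically: adding a weak, delayed copy of the return pulse stretches the time over threshold, so a symmetric-centre compensation (exact for a single clean pulse) lands late, i.e., produces a positive range error. The pulse shapes, amplitudes, and delay are illustrative assumptions, not fitted to the actual hardware.

```python
import numpy as np

def time_over_threshold(t, signal, threshold):
    """Leading crossing time and width above threshold on a sampled waveform."""
    above = np.flatnonzero(signal > threshold)
    t_lead, t_trail = t[above[0]], t[above[-1]]
    return t_lead, t_trail - t_lead

def gaussian(t, amp, t0, sigma=10.0):
    return amp * np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))

t = np.linspace(0.0, 300.0, 300_001)
clean = gaussian(t, 1.0, 100.0)                  # single-path return
multipath = clean + gaussian(t, 0.25, 130.0)     # plus a weak, delayed stray reflection

lead_c, width_c = time_over_threshold(t, clean, 0.2)
lead_m, width_m = time_over_threshold(t, multipath, 0.2)

# Symmetric-centre compensation: exact for the clean pulse, late for multipath.
centre_clean = lead_c + width_c / 2.0   # recovers the true arrival time
centre_multi = lead_m + width_m / 2.0   # biased late -> positive distance error
```

Swapping the delayed pulse for a slightly *early* one reproduces the right-mirror case instead: the leading edge advances while the width also grows, which is why those two effects partially oppose each other without cancelling.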
When the primary light path is surrounded by mirrors both on the left and on the right, the measurements are affected by both stray reflections. Both the positive measurement error humps and the negative measurement error trend are present, as can be seen in
Figure 11d.