Figure 1.
The GNG topological map on the experimental dataset. As the figure shows, when GNG learns both position and color, the connections become very tangled. (a) Original point cloud. (b) GNG topology learned using position only. (c) GNG topology learned using position and color.
Figure 2.
Map building methods for path planning. (a) A real environment. (b) A grid map. (c) A polygonal map.
Figure 3.
Topological map building. (a) Environmental map. (b) Roadmap.
Figure 4.
An example of topological path planning in a polygonal map. (a) Visibility graph. (b) Voronoi diagram.
Figure 5.
The overall process of fast MS-DBL-GNG. The network is first initialized by creating multiple starting points in the point cloud. Based on this initialization, the point-cloud data are rearranged and split into multi-scale mini-batches. Each mini-batch is learned twice. During learning, the temporary variables are first reset, and the mini-batch is then learned through batch matrix calculation. After learning, the temporary variables are used to update the network's node weights and edges. The total number of nodes to add is then calculated, and those nodes are added to the network. The process repeats until all multi-scale mini-batches have been processed.
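The batch matrix-calculation step mentioned in the caption can be illustrated with a vectorized winner / second-winner search. This is a minimal NumPy sketch of the idea, not the authors' implementation; the array shapes and variable names are assumptions.

```python
import numpy as np

def batch_find_winners(W, X):
    """Find each sample's winner and second-winner node indices in one
    matrix operation (a sketch of the batch-calculation idea)."""
    # Pairwise squared distances: rows are samples, columns are nodes.
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)
    order = np.argsort(d2, axis=1)
    return order[:, 0], order[:, 1]

W = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # node positions
X = np.array([[0.1, 0.0], [1.9, 2.1]])              # one mini-batch
s1, s2 = batch_find_winners(W, X)
# s1 -> [0, 2], s2 -> [1, 1]
```

Computing all sample-node distances at once is what lets each mini-batch be learned in a few array operations instead of a per-sample loop.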
Figure 6.
An example of distributed initialization with three starting points. The circles are data, and the asterisks are nodes. First, a node is randomly selected from the last batch of data as the first starting point. Then, the third-closest node is selected and connected. After that, the B data nearest the starting point are deleted. The next starting point is selected in the area farthest from the current starting point. The same process is repeated until all three starting points are initialized.
Figure 7.
The fast multi-scale batch-learning process. Data are learned from a small scale (left) to a full batch (right). However, this study avoids learning the full batch and instead learns the same mini-batch twice in each learning phase.
Figure 8.
The example procedure for balancing the data distribution in each mini-batch, where the number of groups is 3 and L is 2. First, each set of data is divided into groups; the data are then rearranged so that each mini-batch draws from every group.
Figure 9.
The overall system architecture for automatic calibration using topological mapping. First, two Orbbec cameras are set up in the environment to observe two different, partially overlapping areas. RGB point clouds are then extracted using the intrinsic parameters, depth, and RGB color provided by the cameras. The proposed method, fast MS-DBL-GNG, extracts a topological map from each point cloud. These topological maps are then used to extract histogram features, followed by calibration with RANSAC and Color-ICP. The calibration yields extrinsic parameters, which are used to transform the point clouds into the world coordinate system.
Figure 10.
The challenge of calibrating three or more point clouds: the two selected point clouds may not share any overlapping area. In addition, there is no ID indicating the arrangement of the cameras.
Figure 11.
Each point cloud is first merged with its best-matching point cloud. Duplicate merges are removed, and matching is then repeated until all point clouds have been used.
Figure 12.
Two different view setups used for the experiments.
Figure 13.
Examples of photos taken from two view types. From left to right, the first two are view type 1, and the second two are view type 2.
Figure 14.
Examples of point clouds taken from two view types. From left to right, the first two are view type 1, and the second two are view type 2.
Figure 15.
The different learning phase results.
Figure 16.
Several examples of topological maps extracted from point clouds using fast MS-DBL-GNG. From left to right, the first two are view type 1, and the second two are view type 2.
Figure 17.
The examples of calibrated point cloud results for view type 1 (left) and view type 2 (right).
Figure 18.
Example point clouds for multi-camera calibration. Adjacent views are related from left to right or right to left.
Figure 19.
The example of point clouds from four camera views calibrated using the proposed method.
Figure 20.
Example of topological map usage for two calibrated point clouds. The walkable areas are easy to distinguish via the topological map (shown in blue). The walkable path does not cover the area close to the table, which is advantageous for robot navigation. This illustrates the concept of intelligent sensors that provide the target with the required information appropriately. (a) Calibrated with two point clouds. (b) Merged from two topological maps. (c) Extracted walkable area of the topological maps.
Table 1.
The main characteristics and differences of several topological mapping methods.
| Method | SOM | GCS | GNG | SOINN |
|---|---|---|---|---|
| Topology Preservation | ✓ | ✓ | | |
| Incremental Learning | | ✓ | ✓ | ✓ |
| Topological Clustering | | | ✓ | ✓ |
Table 2.
The main feature differences between GNG variants.
| Method | Standard GNG | FCM-BL-GNG [21] | MS-BL-GNG [22] | Fast MS-BL-GNG [19] | DBL-GNG [18] |
|---|---|---|---|---|---|
| Node initialization | Two random nodes | More than two random nodes | Two random nodes | Three random nodes | Distributed with more than two nodes |
| Node growth frequency | One node per interval | One node per epoch | One node per mini-batch | One node per condition met (iteration) | Multiple nodes per epoch |
| Data sampling | All data per epoch | All data per epoch | All data per scale | One mini-batch per scale | All data per epoch |
| Batch learning strategy | n/a | One by one | One by one | One by one | Matrix calculation |
Table 3.
Main notations used in fast MS-DBL-GNG.
| Notation | Description |
|---|---|
| M | The maximum number of nodes. |
| m | The current number of network nodes. |
| | The total number of starting points. |
| L | The total number of learning phases. |
| | All data in the point cloud. |
| | A mini-batch of learning phase l. |
| | The position features of the mini-batch in learning phase l. |
| | The datum of index i. |
| D | The total number of data. |
| W | The network nodes. |
| | The position features of the network nodes. |
| | The k-th network node. |
| | The error of node k. |
| | The connection between node k and node j. |
| | The batch size for learning phase l. |
| | The accumulated weights used to update the network nodes. |
| A | The number of activations for the node. |
| | The temporary edge connection. |
| | A small positive decimal number. |
| | The node index list. |
| | The learning rate of the winner node. |
| | The learning rate of the winner node's neighbors. |
| | The error discount factor. |
Table 4.
The experimental results of different GNG methods.
| Methods | Quantization Error | Computational Time |
|---|---|---|
| Standard GNG | 0.01696 ± 0.00194 | 40.41041 ± 4.22518 |
| FCM-BL-GNG [21] | 0.01767 ± 0.00194 | 11,662.27850 ± 2798.33940 |
| MS-BL-GNG [22] | 0.01742 ± 0.00225 | 220.92249 ± 8.02279 |
| Fast MS-DBL-GNG1 | 0.02031 ± 0.00326 | 0.90484 ± 0.28579 |
| Fast MS-DBL-GNG2 | 0.01299 ± 0.00144 | 2.34518 ± 0.43387 |
| Fast MS-DBL-GNG3 | 0.01264 ± 0.00145 | 4.13767 ± 0.75200 |
Table 5.
Comparison of experimental results between the proposed method and the voxel and Octree methods. The fast MS-DBL-GNG results were extracted directly from the point cloud without voxel pre-processing.
| Methods | Quantization Error | Computational Time |
|---|---|---|
| Voxel | 0.01357 ± 0.00001 | 0.01872 ± 0.00001 |
| Octree | 0.02402 ± 0.00400 | 0.48581 ± 0.01760 |
| Fast MS-DBL-GNG | 0.01145 ± 0.00131 | 49.88926 ± 0.79331 |
Table 6.
One-to-one calibration results (source to target). The results show that calibrating with the topological maps extracted via fast MS-DBL-GNG is more accurate and faster than using the voxel method alone or the other methods.
Distance Error

| Method | Fast Global Registration [34] | Voxel | Fast MS-DBL-GNG |
|---|---|---|---|
| View Type 1 | 0.45546 ± 0.15897 | 0.33185 ± 0.11623 | 0.23045 ± 0.12913 |
| View Type 2 | 0.48632 ± 0.36297 | 0.43923 ± 0.36283 | 0.33663 ± 0.24264 |

Calibration Time (seconds)

| Method | Fast Global Registration [34] | Voxel | Fast MS-DBL-GNG |
|---|---|---|---|
| View Type 1 | 1.30145 ± 0.45650 | 1.56605 ± 0.65369 | 0.26498 ± 0.34462 |
| View Type 2 | 1.74838 ± 0.52938 | 2.03262 ± 0.84651 | 0.19595 ± 0.11501 |
Table 7.
Multi-camera calibration results (four cameras). The results show that multi-camera calibration using the topological map extracted via fast MS-DBL-GNG achieved the lowest distance error and calibrated faster than the voxel approach alone.
| Method | Fast Global Registration [34] | Voxel | Fast MS-DBL-GNG |
|---|---|---|---|
| Distance Error | 0.06328 ± 0.03121 | 0.09531 ± 0.11626 | 0.02779 ± 0.04742 |
| Computational Time (Seconds) | 48.62213 ± 18.27764 | 1806.24649 ± 576.47094 | 135.37025 ± 20.09344 |