Figure 1.
Scheme of the proposed views network planning algorithm.
Figure 2.
Example of the sphere with rendered texture used for final validation: (a) RGB noise (random colours), (b) colour triangles, (c) stained glass, (d) random colour line segments, and (e) random black points.
Figure 3.
Example of (a) a sphere reconstruction from a two-camera scenario; (b) the results of density computation (points/mm²); (c) accuracy evaluation (mm).
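Density maps such as the one in Figure 3b can be computed with a standard k-nearest-neighbour estimator over the reconstructed point cloud. The sketch below is a minimal Python version and is not taken from the paper: the choice of k and the disc approximation are assumptions, and coordinates are assumed to be in millimetres so the output is in points/mm².

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density(points_mm: np.ndarray, k: int = 16) -> np.ndarray:
    """Per-point surface density estimate in points/mm^2.

    Assumes the k nearest neighbours of a surface point lie roughly
    within a disc of radius r_k on the surface, so density ~ k / (pi r_k^2).
    """
    tree = cKDTree(points_mm)
    # Query k + 1 neighbours because the nearest "neighbour" is the point itself.
    dists, _ = tree.query(points_mm, k=k + 1)
    r_k = dists[:, -1]                      # distance to the k-th neighbour [mm]
    return k / (np.pi * r_k ** 2)
```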
Figure 4.
Example of (a) a 12-camera setup with the reconstructed sphere; (b) a view of a 360-degree scene with the cameras distributed on columns; (c) a screenshot from 3ds Max of the same 360-degree scene.
Figure 5.
Example scene volumes: (a) the measurement volume, shown as a black cuboid, and the permitted volume, shown as the segments on which cameras can be mounted, as described in this paper; (b) an example of an angular range within which predictions are calculated (45–135 degrees in the polar direction, 0–360 degrees in the azimuthal direction).
Figure 6.
Example of (a) a discretised measurement volume and (b) the normal directions of individual points used for calculating predictions.
Figure 7.
An example scene with the camera setup and a discretised measurement volume, with the number of observing cameras shown on a colour scale.
Figure 8.
Reconstruction coverage in a two-camera scenario with the sphere on the Z-axis at a 2-m distance: (a) evaluations, (b) predictions, (c) differences. In (a) and (b), red points mark areas where the sphere was reconstructed; in (c), red points mark areas where the evaluations differed from the predictions. Blue points mark the opposite.
Figure 9.
Reconstruction coverage in a six-camera scenario with the sphere on the Z-axis at a 2-m distance: (a) evaluations, (b) predictions, (c) differences. In (a) and (b), red points mark areas where the sphere was reconstructed; in (c), red points mark areas where the evaluations differed from the predictions. Blue points mark the opposite.
Figure 10.
Density evaluation of the reconstructed sphere at the following distances: (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras. In order from bottom to top, these were as follows: two-, three-, four-, six-, nine-, and twelve-camera scenarios.
Figure 11.
Density predictions of the reconstructed sphere at the following distances: (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras. In order from bottom to top, these were as follows: two-, three-, four-, six-, nine-, and twelve-camera scenarios.
Figure 12.
The relative difference between the predictions and evaluations of the density of the reconstructed sphere at the following distances: (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras. In order from bottom to top, these were as follows: two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 30% are marked in black.
Figure 13.
Evaluation of the accuracy of the reconstructed sphere at the following distances: (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras. In order from bottom to top, these were as follows: two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 0.2 mm for 2 m, 0.25 mm for 3 m, 0.35 mm for 4 m, and 0.4 mm for 5 m, respectively, are marked in black.
Figure 14.
Predicted accuracy of the reconstructed sphere at the following distances: (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras. In order from bottom to top, these were as follows: two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 0.2 mm for 2 m, 0.25 mm for 3 m, 0.35 mm for 4 m, and 0.4 mm for 5 m, respectively, are marked in black.
Figure 15.
The relative difference between the accuracy predictions and evaluations of the reconstructed sphere at the following distances: (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras. In order from bottom to top, these were as follows: two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 100% are marked in black.
Figure 16.
Average 360-degree scene with (a) the 20-camera distribution and 5 reconstructed spheres in the measurement volume, with (b) density and (c) accuracy evaluations. Values higher than 3.0 points/mm² in (b) and 0.5 mm in (c) are marked in black.
Figure 17.
Quasi-optimal 360-degree scene with (a) the 20-camera distribution and 5 reconstructed spheres within the measurement volume, with (b) density and (c) accuracy evaluations. Values higher than 3.0 points/mm² in (b) and 0.5 mm in (c) are marked in black.
Figure 18.
The measurement system used for 3D reconstruction of real data: (a) the camera setup with the measurement region (black cuboid) and an example human body reconstruction; (b) an image captured by one of the measurement system's cameras.
Figure 19.
3D reconstructions of the subject in different poses: (a) pose 1, (b) pose 2, (c) pose 3, (d) pose 4, with the corresponding unoccluded sparse point clouds (e–h).
Figure 20.
Sparse point clouds of the subject in different poses: (a) pose 1, (b) pose 2, (c) pose 3, (d) pose 4, with density evaluations presented as colormaps.
Figure 21.
Differences in the point distributions obtained from the same synthetic images by (a) OpenMVS (Patch-Based Stereo Method) and (b) Agisoft Metashape (Semi-Global Matching).
Table 1.
Comparison of reconstructions of the same sphere in a single scenario with different textures: number of points and reconstruction accuracy.
| Texture Type | Number of Points | RMSE of Deviations from the 500 mm Sphere [mm] |
|---|---|---|
| RGB noise (random colours) | 11,497,207 | 0.26 |
| Colour triangles | 7,307,546 | 0.52 |
| Stained glass | 9,722,841 | 0.67 |
| Random colour line segments | 10,683,159 | 0.33 |
| Random black points | 11,708,933 | 0.33 |
Table 2.
Ratio of invalid reconstruction coverage predictions for different scene types.
| Number of Cameras | Error Predictions [%] |
|---|---|
| 2-camera | 3.75 |
| 3-camera | 2.86 |
| 4-camera | 2.79 |
| 6-camera | 3.9 |
| 9-camera | 3.8 |
| 12-camera | 4.1 |
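Read together with the difference maps of Figures 8c and 9c, the "Error Predictions" ratio is plausibly the fraction of sampled surface points whose predicted coverage disagrees with the evaluated coverage. A minimal sketch under that assumption, with both inputs as boolean masks over the same sample points:

```python
import numpy as np

def invalid_prediction_ratio(predicted: np.ndarray, evaluated: np.ndarray) -> float:
    """Percentage of sample points where the predicted coverage
    (True = reconstructed) disagrees with the evaluated coverage."""
    predicted = np.asarray(predicted, dtype=bool)
    evaluated = np.asarray(evaluated, dtype=bool)
    return float(100.0 * np.mean(predicted != evaluated))
```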
Table 3.
Statistics for the differences between predicted and evaluated density under different scenarios with outliers removed.
| Scene Type | Average Difference (%) | RMS Difference (%) | Median Difference (%) | Standard Deviation (%) |
|---|---|---|---|---|
| 2-camera | 2.49 | 0.019 | 1.97 | 1.84 |
| 3-camera | 2.83 | 0.039 | 2.14 | 2.26 |
| 4-camera | 15.97 | 0.019 | 15.05 | 10.68 |
| 6-camera | 10.72 | 0.118 | 9.19 | 7.19 |
| 9-camera | 8.79 | 0.089 | 7.28 | 5.94 |
| 12-camera | 10.73 | 0.099 | 9.81 | 5.61 |
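Tables 3, 4, 6, and 8 all report the same four summary statistics of the per-point relative difference between predictions and evaluations. A minimal sketch of such a summary follows; the 3-sigma outlier cut is an assumption, since the captions only state that outliers were removed.

```python
import numpy as np

def relative_difference_stats(predicted: np.ndarray, evaluated: np.ndarray,
                              sigma_cut: float = 3.0) -> dict:
    """Average, RMS, median, and standard deviation of the per-point
    relative difference |predicted - evaluated| / evaluated, in %."""
    predicted = np.asarray(predicted, dtype=float)
    evaluated = np.asarray(evaluated, dtype=float)
    valid = evaluated > 0                      # guard against division by zero
    rel = 100.0 * np.abs(predicted[valid] - evaluated[valid]) / evaluated[valid]
    rel = rel[np.abs(rel - rel.mean()) <= sigma_cut * rel.std()]  # outlier cut
    return {
        "average": float(rel.mean()),
        "rms": float(np.sqrt(np.mean(rel ** 2))),
        "median": float(np.median(rel)),
        "std": float(rel.std()),
    }
```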
Table 4.
Statistics for differences between predicted and evaluated accuracy under different scenarios with outliers removed.
| Scene Type | Average Difference (%) | RMSE (%) | Median Difference (%) | Standard Deviation (%) |
|---|---|---|---|---|
| 2-camera | 50.84 | 0.35 | 44.79 | 26.29 |
| 3-camera | 21.37 | 0.29 | 15.83 | 16.79 |
| 4-camera | 21.95 | 0.32 | 15.54 | 17.76 |
| 6-camera | 20.93 | 0.29 | 15.46 | 15.78 |
| 9-camera | 20.05 | 0.29 | 16.48 | 13.11 |
| 12-camera | 28.89 | 0.44 | 25.85 | 17.41 |
Table 5.
Statistics of coverage, density, and accuracy evaluation for different setups with outliers removed.
| Value | Average Setup | Quasi-Optimal Setup |
|---|---|---|
| Coverage | 68.2% | 92.3% |
| Average density | | |
| Median density | | |
| Average accuracy | 0.29 mm | 0.42 mm |
| Median accuracy | 0.19 mm | 0.28 mm |
Table 6.
Statistics for the difference between prediction and evaluation for different setups with outliers removed.
| Value | Statistics Type | Average Setup (%) | Quasi-Optimal Setup (%) |
|---|---|---|---|
| Coverage | Error prediction | 5.38 | 3.36 |
| Density | Average difference | 32.0 | 23.17 |
| | RMSE | 0.38 | 0.21 |
| | Median difference | 25.53 | 21.42 |
| | Standard deviation | 26.73 | 12.07 |
| Accuracy | Average difference | 36.85 | 34.00 |
| | RMSE | 0.41 | 0.32 |
| | Median difference | 28.9 | 28.67 |
| | Standard deviation | 25.38 | 22.10 |
Table 7.
Density evaluation statistics for different poses of a human.
| Value | Pose 1 | Pose 2 | Pose 3 | Pose 4 |
|---|---|---|---|---|
| Average density | | | | |
| Median density | | | | |
Table 8.
Statistics for the differences between predicted and evaluated density for different human poses with outliers removed.
| Value | Statistics Type | Pose 1 (%) | Pose 2 (%) | Pose 3 (%) | Pose 4 (%) |
|---|---|---|---|---|---|
| Density | Average difference | 39.8 | 44.2 | 33.8 | 45.0 |
| | RMSE | 1.43 | 1.71 | 1.39 | 1.63 |
| | Median difference | 40.9 | 45.7 | 35.6 | 46.4 |
| | Standard deviation | 9.97 | 9.74 | 10.69 | 8.09 |