1. Introduction
The accurate and practical optical 3D reconstruction of transparent objects has been an open challenge in the field of optical metrology [1,2,3]. The main difficulty in reconstructing transparent objects with conventional practical optical metrology tools such as structured light scanning, laser scanning, and photogrammetry is the combination of transmission, reflection, and refraction that transparent objects exhibit [4]. This prevents the use of the aforementioned tools, all of which require high levels of diffuse surface reflection to operate [5].
The typical way of getting around this limitation is to render the transparent object’s surface opaque by spray-coating it with a diffuse reflection layer [6]. The transparent object can then be reconstructed with conventional visible-spectrum optical metrology tools [6]. The spraying process, however, is time-consuming, adds considerable cost to the reconstruction process, and may not be permitted in some applications (e.g., sensitive cultural heritage objects, food and drink containers) due to the risk of contamination.
There have been multiple attempts to create alternative spray-less digitization techniques for transparent objects in the past [1,2]. The proposed techniques include OPT with refractive index-matching liquid [7], fringe projection for the detection of pattern differences [8], infrared (IR) detection of induced surface heating [9,10], infrared digital holography [11], ultraviolet (UV) fluorescence [12,13], a combination of X-ray tomography and photogrammetry [14], shape from polarization [15], a combination of polarization imaging and inverse rendering [16], shape from interaction [17], the visual hull technique [18], various AI-based image processing methods [19,20], terahertz (THz) imaging and tomography [21,22], passive single-pixel imaging [23], and edge-estimation computer vision techniques [24].
None of the spray-less techniques suggested above, however, possesses the combination of advantages of conventional visible-spectrum optical metrology tools: being concurrently cheap to use, rapid, practical, non-contact, and easily automated.
In this work, we investigate the possibility of achieving the 3D reconstruction of large thin-walled transparent objects for quality control purposes in the beverage packaging industry (worth an estimated USD 144.40 billion in 2023 according to Ref. [25]) and for digital preservation purposes in cultural heritage applications. The solution proposed herein retains all the aforementioned characteristics of the conventional optical metrology tools already in use in these industries today, but can additionally operate without opaque spray. To achieve this, the use of OPT [26] without refractive index-matching liquid was investigated.
2. Materials and Methods
The novelty in our approach lies in the specific setup being able to take advantage of the cone-beam xCT principle, the Radon transform, by swapping the detector and light source of the original fan-beam architecture. Additionally, we deliberately restricted the measured objects to thin-walled, cylindrically symmetric objects in air, in order to enable the measurement without index-matching liquid. As shown in Appendices A.1 and A.2, the light propagation characteristics of visible light rays through these types of objects are close to what OPT requires in order to perform an accurate reconstruction. Some errors are still expected due to the minimal but non-zero refraction induced by the object’s sidewalls.
2.1. Software and Hardware Parameter Settings
The software used to perform the tomography calculations from the collected images was the Astra Software Toolbox v2.0 [27], a popular open-source X-ray CT package. The Astra Toolbox was selected because it was developed by an academic team and is a well-known and commonly used tool for FBP, as documented in multiple academic papers on the subject, and its algorithms have therefore been evaluated thoroughly in terms of accuracy and correctness [27].
The scanning setup configurations available in the Toolbox are, for 2D scanning, ‘fan-beam’ and ‘parallel-beam’ setups, and, for 3D scanning, ‘cone’ and ‘parallel3d’ setups. A diagram depicting all four readily available configuration modes is shown in Figure 1.
In this work, the ‘cone’ configuration (Figure 1d) was used, replacing the X-ray source with an optical camera and the X-ray detectors with a field light source (an LCD panel). It is assumed that the camera used can be modeled by the ‘pinhole camera model’, so we essentially use the 3D ‘cone’ beam geometry in reverse. For this reason, the calibration parameters required for this setup are not directly transferable to the optical camera setup, in the sense that the result will not be at the correct scale.
In the ‘cone’ beam setup used, the following parameters must be set in the Astra Toolbox: the CCD x and y pixel distances (set to 1), the number of detector pixel rows (512), the number of detector pixel columns (512), the explicit projection angles (64 points of view), the distance between the source and the center of rotation (70 cm), and the distance between the center of rotation and the detector array (20 cm).
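As a minimal sketch of how these parameters might be assembled for the toolbox's Python bindings, the snippet below builds the 64 explicit projection angles and collects the settings listed above; the dictionary keys are our own naming, and the commented-out call shows the documented `astra.create_proj_geom('cone', ...)` entry point (the exact unit handling there is an assumption, not taken from this work):

```python
import math

# 64 equally spaced views over a full rotation (radians, as Astra expects)
angles = [2.0 * math.pi * k / 64 for k in range(64)]

# Parameters as set in this work; the reconstruction is correct in shape
# but not in scale, hence the separate calibration step.
cone_geometry = {
    "det_spacing_x": 1.0,      # CCD pixel distance, x
    "det_spacing_y": 1.0,      # CCD pixel distance, y
    "det_row_count": 512,      # detector pixel rows
    "det_col_count": 512,      # detector pixel columns
    "angles": angles,          # explicit projection angles
    "source_origin_cm": 70.0,  # camera (stand-in source) to rotation center
    "origin_det_cm": 20.0,     # rotation center to LCD panel (stand-in detector)
}

# With the Astra Python bindings this would be passed along the lines of:
# proj_geom = astra.create_proj_geom('cone', 1.0, 1.0, 512, 512,
#                                    numpy.array(angles), 70.0, 20.0)
```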
Using these values, we recover the correct object shape, but without scale. The object must therefore be scaled appropriately, either by measuring its exact distance from the camera or by scaling the results against the measurement of a known object. In our case, we performed the latter.
A polished glass ball 80 mm in diameter (a ‘lens ball’ in specialist photography), shown in Figure 2, was used for calibration. Since the sphere is not hollow and has a convex shape, its outer surface was reconstructed by manually extracting its silhouette from each of the 64 axial rotations using the well-known ‘visual hull’ 3D reconstruction method [28]. The sphere’s reconstruction was then loaded into point cloud processing software (CloudCompare [29], https://www.cloudcompare.org/, accessed on 7 March 2023) and its diameter was measured. The measured diameter was then divided by the true 80 mm sphere diameter to obtain the system’s calibrated scaling factor.
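The calibration step amounts to a single division followed by a per-point rescale; the following sketch illustrates it (the function names are ours, not from the paper's code):

```python
def scaling_factor(measured_diameter, true_diameter_mm=80.0):
    # Measured lens-ball diameter (in reconstruction units) divided by
    # the known 80 mm diameter, as described in the calibration step.
    return measured_diameter / true_diameter_mm

def rescale(points, factor):
    # Divide each coordinate by the factor to map reconstruction
    # units onto millimetres.
    return [(x / factor, y / factor, z / factor) for (x, y, z) in points]
```

For example, a sphere measured at 100 reconstruction units gives a factor of 1.25, and every reconstructed coordinate is divided by that factor.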
The computational hardware used for our calculations was a single laptop, equipped with an Intel(R) Core(TM) i7-1065G7 CPU @ 1.30 GHz (boosting to 1.50 GHz) and an onboard NVIDIA(R) GeForce(R) MX250 graphics card with 2 GB of GDDR5 memory. However, for the Astra Toolbox calculations, only CPU mode was used, without GPU acceleration. This means that the computation time (currently 2–5 min per object) can be improved significantly by using a stronger CPU and GPU and by enabling GPU acceleration.
2.2. Reconstruction Algorithm Used
The Astra Toolbox software uses the filtered backprojection (FBP) algorithm [30] used in X-ray CT to reconstruct the measured volume density from the photographs taken at each rotation angle. The density $f(x, y)$ of each voxel was calculated per slice at a particular z height by the integral in Equation (1) [27]:

$$f(x, y) = \int_{0}^{\pi} Q_{\theta}(x \cos\theta + y \sin\theta)\, d\theta \qquad (1)$$

where $\theta$ is the rotation stage’s angle, x and y are the particular slice’s voxel locations, and $Q_{\theta}$ is the filtered Fourier transform of the detected image, described in Equation (2):

$$Q_{\theta}(t) = \int_{-\infty}^{\infty} S_{\theta}(w)\, |w|\, e^{2\pi i w t}\, dw \qquad (2)$$

where $w$ is the frequency in the Fourier domain, $t$ is the spatial dimension of the 1D absorption measurement of each slice (the row of pixels of the photograph acquired at each rotation), and $S_{\theta}(w)$ is the 1D Fourier transform of that measurement.
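The effect of the $|w|$ (ramp) weighting in Equation (2) can be illustrated with a tiny pure-Python discrete Fourier transform; this is an illustrative sketch of the filtering step only, not the Astra implementation, and the function names are ours:

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (illustration only).
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(spectrum):
    # Inverse DFT, returning the real part.
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * m / n)
                for k in range(n)).real / n
            for m in range(n)]

def ramp_filter(projection):
    # Weight each frequency component by |w|, as in Equation (2).
    # The DC term gets weight zero, so a constant projection filters to zero.
    n = len(projection)
    spectrum = dft(projection)
    weights = [min(k, n - k) / n for k in range(n)]
    return idft([s * w for s, w in zip(spectrum, weights)])
```

Filtering a constant row of pixels yields (numerically) zero, which is exactly the high-pass behavior that sharpens edges before backprojection.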
3. Experiments
3.1. Experimental Procedure
To measure the shape of thin-walled transparent items with OPT without refractive index-matching fluid, the following experimental sequence, similar to a typical OPT workflow, was used:
A rotation stage, a camera, and a field light source were set up in the apparatus shown in Figure 3;
Light from the field source (LCD panel) was projected through the object and registered at the camera;
The object was placed in the middle of the rotation stage and rotated to acquire 64 rotational views around the object;
The images acquired by the black-and-white camera were inverted, so that areas of high absorption appear bright and areas of low absorption appear dark;
The images were processed into a density volume by the Astra Toolbox [27] X-ray CT reconstruction software;
The voxels of the 3D density volume were thresholded to remove the low density of air, leaving the higher-density voxels of the object;
To extract a single surface from the thresholded density volume, the object density volume was then post-processed slice-by-slice and line-by-line, retaining only the peak densities on each row of the image plane. These peaks represent the areas of densest material and hence the sidewall (Figure 4);
The peak locations were then scaled using the calibrated scaling factor calculated in Section 2.1 and saved to a file in point cloud format.
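The row-wise peak extraction described in this workflow can be sketched as follows; the threshold value and function name are illustrative, not taken from the paper's code:

```python
def row_peaks(row, threshold):
    """Indices of local density maxima above `threshold` in one image row.

    For a thin-walled object, the two largest local maxima per row
    correspond to the two sidewall crossings.
    """
    return [i for i in range(1, len(row) - 1)
            if row[i] > threshold
            and row[i] >= row[i - 1]
            and row[i] >= row[i + 1]]
```

For instance, a synthetic density row with two ridges above an air-level threshold returns exactly the two sidewall indices.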
3.2. Objects Selected
The types of objects to which this measurement principle is most suited are hollow, thin-walled, cylindrically symmetric objects, which do not induce considerable refraction as light traverses them.
Hollow objects can be further subdivided into two categories. ‘Shelled’ objects, usually made of plastic, are those whose internal and external surfaces are identical in shape, one scaled down relative to the other by the wall thickness. ‘Non-shelled’ hollow objects, commonly made of glass, are those whose internal and external surfaces are not identical in shape and which therefore have thickness variations around the object.
Both types are of interest in the manufacturing and cultural heritage applications on which this investigation focuses. To test the category of ‘shelled’ objects commonly used in the beverage industry (e.g., soda and water bottles), we selected a soda bottle and two water bottles (Figure 5a–c). To test the category of hollow ‘non-shelled’ objects, applicable mainly to cultural heritage and to glass food and drink containers, contemporary glass cups (two liqueur glasses and one wine glass), with and without embossed features, were measured (Figure 5d–f).
However, ‘non-shelled’ hollow objects are more difficult to reconstruct, since the material thickness is not consistent around the whole object and the refraction therefore varies. They contain areas where the light passing through the object encounters thick layers of material and is refracted significantly (the neck, base, and bottom of the cup area). Additionally, many cultural heritage items contain embossed features, which further add to the variation of material thickness in specific areas of the object. We nevertheless tested such items, to probe the limits of the suggested OPT method.
Before reconstructing the hollow ‘non-shelled’ objects selected in this work, they were cut down for two reasons. The first is that their sidewall thickness needed to be accurately measured with electronic calipers (Table 1), which could not be done from the mouth area, which is much thicker. The second is so that they could fit in the field of view of the camera used in the OPT setup, which could only measure objects up to about 150 mm in height (Figure 2). To measure the hollow areas of the selected glass objects (the cup areas of the wine and liqueur glasses), they were placed inverted on the rotation table, with the hollow side down and the stem and base pointing up. In any case, only the hollow parts of the glass items were considered; the stem and base, which by default contain thick material areas, were ignored in this study.
4. Results
4.1. 3D Reconstruction Accuracy
To obtain reference 3D reconstruction results for the external shapes of the measured items, we scanned the objects with a conventional white-light structured light optical scanner used for industrial purposes, the Shining 3D Einscan Pro 2X (Figure 6). The reference scanner was calibrated to an accuracy of ±22 μm using the calibration plates provided by Shining 3D. To use this scanner, the transparent objects had to be coated with the ‘AESUB blue’ opaque spray coating.
Then, using the CloudCompare [29] point cloud software, the reference point cloud reconstructions and the point cloud reconstructions created by the OPT process were first aligned by hand and then aligned more accurately via the Iterative Closest Point (ICP) algorithm to an error tolerance of 10. Finally, to extract the dimensional error, the residual point cloud distances were calculated. The point cloud errors are depicted as color textures on the OPT point clouds in Figure 7, and the numerical average of the point cloud distances for each object is reported in Table 2.
There are multiple methods of comparing point clouds [31,32,33], using point-to-point, point-to-mesh, and mesh-to-mesh strategies. We opted for the closest point-to-point distance, rather than point-to-mesh or mesh-to-mesh comparisons, because in our case the reference point clouds created by the structured light scanner were extremely dense, and it was therefore not necessary to create a mesh surface to compare the point clouds accurately, as suggested by Ref. [32].
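A brute-force sketch of this closest point-to-point metric is shown below (CloudCompare uses accelerated spatial search structures, so this naive O(n·m) version is for illustration only):

```python
import math

def avg_closest_distance(cloud_a, cloud_b):
    # Mean, over the points of cloud_a, of the Euclidean distance
    # to the nearest point in cloud_b.
    return sum(min(math.dist(p, q) for q in cloud_b)
               for p in cloud_a) / len(cloud_a)
```

Note that the metric is asymmetric: it is evaluated from the (sparser) reconstructed cloud towards the dense reference cloud.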
For hollow ‘non-shelled’ objects, the average distance errors ( mm) were, as expected, higher on average than those measured for the hollow ‘shelled’ objects ( mm). This fact alone, however, did not reflect the much wider range of errors experienced on these objects due to intense refraction effects, which manifested as distorted embossed shapes (Figure 5e), artificial ‘ghost material’ partially filling the hollow areas (Figure 8), and a reduction of the object’s size (Figure 8).
The maximum precision expected from the specific OPT setup, in general, was calculated by dividing the field of view by the number of available camera pixels and was found to be  mm per pixel. Therefore, the minimum dimensional error expected in the lateral and vertical distances is half this value,  mm. This sanity check is in line with the measurements we collected (Table 2). The measurement with the lowest error achieved was an average point cloud distance of  mm between the OPT and the reference reconstructions, for the ‘Selinari’ water bottle (Table 2).
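This sanity check is simple arithmetic; as an illustration, the sketch below uses the ~150 mm usable field of view and the 512-pixel detector height quoted earlier in the paper (treating those two figures as the relevant values here is our assumption):

```python
def mm_per_pixel(field_of_view_mm, n_pixels):
    # Lateral size of one camera pixel projected onto the object plane.
    return field_of_view_mm / n_pixels

resolution = mm_per_pixel(150.0, 512)    # mm per pixel
min_expected_error = resolution / 2.0    # half a pixel, in mm
```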
4.2. Benefits over Visual Hull 3D Reconstruction
In this work, we use OPT to extract the shape of thin-walled objects as a single surface; it is therefore worth qualitatively comparing it to the visual hull, a closely related technique that was used during calibration. The visual hull technique can operate in the visible spectrum without spray coatings, but it is well known that it can extract only the external convex shape via the use of silhouettes [28], which is why it was used to measure our calibration sphere in Section 2.1.
For hollow non-convex objects, however, this technique cannot be used, as it produces a solid convex 3D shell around the object. For example, when this technique was used in Ref. [18] to reconstruct a wine glass, the opening of the hollow end was covered over. Similarly, any concave cavities (small craters) on the external surface of the object are ‘filled up’ due to the nature of the visual hull technique. OPT, being a tomography technique similar to X-ray CT, does not have these drawbacks: it can reconstruct hollow objects and can also handle concave surface structures.
What is more, OPT can also measure internal surfaces. To demonstrate this ability, we placed two cut-offs of plastic bottles one inside the other; the reconstructed result is shown in Figure 9. However, since we could not perform a reference measurement of the internal surfaces (e.g., using an X-ray CT machine), it was not possible to confirm the achievable accuracy.
4.3. Comparison with Multi-View Stereo (MVS) Photogrammetry
MVS requires surface texture to operate, which is why it is well known to perform very poorly on transparent objects [17]. We nevertheless attempted to reconstruct the objects in this study using MVS, as it is one of the most commonly used camera-based reconstruction techniques today, in order to contrast its reconstruction quality with that achieved by the OPT technique.
In Table 2, the prohibitive RMS errors involved in reconstructing transparent objects using MVS, compared to our reference 3D reconstruction performed via structured light, can be observed. In Figure 10, these large errors are visualized by aligning the point clouds to the reference reconstructions and coloring each point of the MVS-reconstructed point cloud with its minimum distance to the reference point cloud.
4.4. Comparison with Neural Radiance Fields (NeRF)
NeRF is a relatively new reconstruction technique [34]. It uses artificial intelligence to build a non-linear relationship between the input, a single continuous 5D coordinate (the spatial location (x, y, z) and viewing direction (θ, ϕ)), and the output, which is the volume density and view-dependent emitted radiance at that spatial location.
It is primarily used for rendering purposes, but it can also recover the voxelized 3D shape of the object. Due to the complexity of light transport between views, the reconstruction of transparent objects is not fully successful. It is, however, more successful than the multi-view stereo shown in Section 4.3.
In this section, the point clouds created via NeRF are compared in Figure 11 with the reference reconstructions. It can be clearly observed in Figure 11 that NeRF performs better than MVS but worse than OPT. The numerical averages of the errors shown in Table 2 also confirm this observation.
4.5. Summary
A summary of the comparisons against our reference 3D reconstructions is found in Table 2, where we numerically compare the average point cloud error achieved by OPT, MVS, and NeRF. It is clearly seen that OPT is much more accurate in reconstructing the external surface of these transparent objects most of the time, with its accuracy being an order of magnitude better than that of the other two techniques.
A qualitative comparison of the 3D reconstruction quality for the same reconstructed object, shown in Figure 12, reveals that OPT retains the most surface detail and also has the highest level of reconstruction completeness. NeRF follows, with an acceptable level of completeness but without the ability to reconstruct any of the surface details, and lastly MVS, which has both very poor completeness and poor reconstruction fidelity.
Compared with the other methods that have been suggested for the reconstruction of transparent objects, mentioned in the introduction, the cost of OPT is minimal, as the only components required are a field illumination source (such as a large LED panel), a means of rotation, and a black-and-white camera. The speed of the method must be divided into acquisition speed and data processing speed, which can proceed asynchronously if required. Since the acquisition is performed by cameras, it can potentially be performed at the level of milliseconds. The data processing speed demonstrated here can also be improved many times over by the use of parallel GPUs and more professional hardware. Regarding the potential for automation, the method can be completely automated either by adding a robotic arm or by using a conveyor belt.
For the specific class of transparent objects considered in this study, OPT therefore does seem to have the potential to provide near real-time 3D reconstruction at very low cost and with much higher accuracy than any of the techniques that have preceded it.
5. Discussion
The use of OPT over traditional X-ray CT has many benefits. X-ray CT reconstructions are cumbersome, slow, expensive, and present health risks to the operators. On the other hand, one of the main downsides of OPT is the necessary use of index-matching liquid. This work demonstrates the use of OPT without the need to use index-matching liquid by successfully reconstructing a specific class of large hollow and thin-walled objects.
The use of OPT in this work was investigated for two use cases in particular: the quality control of plastic bottles in the beverage packaging industry, and the reconstruction of glass objects in the context of a cultural heritage digital preservation application. Representative plastic and glass objects for these cases were collected and reconstructed in 3D using OPT.
It was shown that, for plastic bottles produced by the beverage packaging industry, an average accuracy of  mm can be achieved with the setup used. The fact that the best point cloud accuracy achieved ( mm) was close to the theoretical precision of the setup ( mm) indicates that an even more precise setup could potentially achieve higher accuracy.
For glass objects in the context of cultural heritage, on the other hand, which on average have thicker sidewalls and also contain some areas with thick optical paths, considerable refraction is produced. The areas that suffered most were the lower parts of the hollow areas, which have thicker sidewalls than the rim; the glass joint between the stem and the vessel; and the embossed designs on the glass surface. The average shape error of the glass objects measured was, as expected, higher than that of the plastic objects, at  mm.
The relatively small numerical difference in average dimensional error (≈0.6 mm) between the ‘shelled’ and ‘non-shelled’ hollow objects, however, does not accurately reflect the rather large qualitative difference in the reconstruction results for the ‘non-shelled’ hollow objects: hollow areas were filled with ‘ghost material’, embossed features were distorted, and objects appeared smaller than their true size.
The dimensional errors experienced on ‘non-shelled’ hollow objects were partly expected, as they are due to the absence of refractive index-matching material, which induces a large amount of refraction for hollow objects with thick sidewalls, as predicted analytically in Section 2.
The advantages of using OPT for particular cultural heritage and industrial applications, where the use of opaque spray coatings is forbidden, are, first and foremost, that no opaque spray coating is needed to perform the 3D measurement. A second great advantage is the ability to reconstruct internal structures (provided they too are thin-walled), something conventional optical tools cannot do. Additionally, OPT retains all the characteristics that make conventional optical metrology tools (structured light, laser scanning, and photogrammetry) attractive for industrial and cultural heritage applications: it is cheap and easy to use, safe for human exposure, camera-based, and capable of extremely fast data acquisition, whilst also having a high degree of reproduction fidelity and accuracy.
The disadvantage of using OPT is that only a narrow class of objects can be reconstructed with high accuracy, namely hollow ‘shelled’ objects such as plastic bottles. When the sidewalls of the object either become much larger in size or deviate too much from cylindrical symmetry, refraction effects become significant enough to distort the shape of the objects considerably and therefore reduce the accuracy of the technique.
When comparing the results with other established 3D reconstruction techniques, namely MVS and NeRF, we can see the qualitative difference both in completeness and in surface feature detail. A side-by-side comparison of the point clouds created is shown in Figure 12.
6. Conclusions and Future Work
In summary, it is demonstrated, to the best of our knowledge for the first time, that it is possible to use OPT without refractive index-matching liquid for the reconstruction of objects larger than 10 mm. The only condition that must be met to achieve high accuracy is that the object’s sidewall be thin and consistent enough not to induce significant refraction. This technique could therefore potentially be used for the dimensional quality assurance of hollow ‘shelled’ objects, such as plastic bottles, in the beverage packaging industry. On the other hand, for the reconstruction of ‘non-shelled’ objects, such as glass cultural heritage objects, the technique is shown to be much more prone to errors due to the higher levels of refraction, which produce severe dimensional and aesthetic distortions of embossed features, as well as artificial shrinking of the object and artificial filling of parts of the hollow object’s volume.
In future work, we aim to improve the speed of the data processing at least 10-fold by using improved hardware, including the fast GPU processing supported by the Astra Toolbox. We also aim to extend the accuracy of the technique to a wider class of objects, namely hollow ‘non-shelled’ objects, by mitigating the induced refraction errors through optical modeling of the light propagation through the object. Furthermore, the maximum shape deviation that can be tolerated will be simulated numerically via Zemax (Canonsburg, PA, USA), an optical simulation software package that performs ray tracing and is used in professional lens and camera design.
7. Patents
An international PCT patent application with number: PCT/GR2023/000051 has been submitted as a result of this work.
Author Contributions
Conceptualization, P.I.S.; methodology, P.I.S.; software, P.I.S. and X.Z.; validation, P.I.S. and T.G.; formal analysis, P.I.S. and X.Z.; investigation, P.I.S.; resources, T.G. and X.Z.; data curation, P.I.S.; writing—original draft preparation, P.I.S.; writing—review and editing, P.I.S. and X.Z.; visualization, P.I.S.; supervision, X.Z.; project administration, X.Z.; funding acquisition, P.I.S. and X.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This project was funded by the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement No. 886094, and the Craeft Horizon Europe Research and Research Innovation Action of the European Commission, Grant Agreement No. 101094349.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data related to this project are not publicly available due to commercialization and a PCT patent application embargo. They will be made available on request after a decision on the patent has been reached.
Acknowledgments
We would like to thank the Non-Destructive Techniques Laboratory of the University of West Attica for the use of their structured light 3D scanner, which was used to acquire reference 3D reconstructions for the objects studied in this publication.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Appendix A
Optical Projection Tomography (OPT) [26,35], as the name suggests, is a tomographic method whose principle of operation is similar to that of X-ray computed tomography (CT). It reconstructs objects in 3D by measuring light absorption through an object from different points of view, the difference being that X-ray CT uses the X-ray spectrum of light to perform the reconstruction while OPT uses the visible spectrum.
For visible light to replicate the functionality of X-rays in X-ray CT, two principles must be adhered to: first, light must be able to pass through the object and be detected on the other side (i.e., not be absorbed completely, so that the absorption is measurable), and second, the light that passes through the object must not be ‘bent’ or refracted as it does so [26].
The two conditions above are normally achieved in OPT for biological imaging, where the measured object is very small (1–10 mm in thickness) so that visible light can shine through it [26], and where refractive index-matching liquid is used so that the object’s refractive index (RI) matches that of the environment and no refraction occurs as light passes through [36]. This refractive index matching process is called ‘optical’ or ‘tissue’ clearing, because it has the effect of making the biological tissue less refractive to visible light and easier to photograph clearly.
For light rays in the visible spectrum, the glass used to create the drinking cups found in cultural heritage collections nominally has an RI between ≈1.3 and 1.7 [37]. Polyethylene terephthalate (PET), on the other hand, a very common plastic material, has a nominal RI of ≈1.58 [37]. Both materials therefore have an RI that is very different from that of the surrounding medium of air (RI ≈ 1), and we would normally expect considerable refraction when visible light rays propagate through these materials.
However, for very thin-walled, cylindrically symmetric objects, such as plastic bottles and wine glasses, two effects come into play that minimize refraction: first, the wall is so thin that the refraction effect is expected to be insignificant (Appendix A.1), and second, the objects are mostly cylindrical, which means that parallel light rays experience two, almost opposite, beam shifts as they pass through, ‘self-correcting’ their path (Appendix A.2). Therefore, for these types of objects, it is valid to assume that OPT can be performed directly in air, without index-matching liquid. We quantify these phenomena via simulation in the following sections.
Appendix A.1. Measuring Visible Light Ray Beam Shift for Small Sidewall Thicknesses
It is assumed in this work that the refraction occurring at any part of a transparent object can be approximated by that of a flat, thin slab of transparent material. This is a valid approximation because the sidewalls are very thin and the surface texture contours are much larger (≈1 mm–10 cm) than the wavelength of the light used (400–600 nm). Therefore, at the scale of the photon, every light ray striking the object essentially experiences the surface as a thin, flat slab of material. To measure the parallel beam-shift effect of a light ray through a thin slab (Figure A1), we use Snell’s law of refraction [4] (Equation (A1)) at both the incoming and the outgoing surface of each sidewall:

$$n_1 \sin\theta_1 = n_2 \sin\theta_2 \qquad (A1)$$
Using the flat-slab approximation and Snell’s law, it is possible to calculate the amount of parallel beam shift for various angles of incidence and slab thicknesses. The calculations were performed for the glass and plastic materials used in the objects we selected to measure, with an RI of 1.51 for BK7 glass (Figure A2) and 1.58 for PET (Figure A3). As can be seen in Figure A2 and Figure A3, the beam shift increases with the angle of incidence and reaches a maximum, almost equal to the slab’s thickness, at an angle of incidence of  from the surface normal.
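Under the flat-slab approximation, the lateral shift follows directly from Snell's law via the standard slab result d = h·sin(θi − θt)/cos θt; the sketch below reproduces this textbook calculation (function names are ours):

```python
import math

def beam_shift(h_mm, theta_i_deg, n_slab, n_medium=1.0):
    """Lateral shift (mm) of a ray crossing a flat slab of thickness h_mm.

    Snell's law gives the transmitted angle; the shift is the standard
    slab result d = h * sin(theta_i - theta_t) / cos(theta_t).
    """
    theta_i = math.radians(theta_i_deg)
    theta_t = math.asin(n_medium * math.sin(theta_i) / n_slab)
    return h_mm * math.sin(theta_i - theta_t) / math.cos(theta_t)
```

For a 1 mm PET slab (RI 1.58), the shift grows from a few hundredths of a millimetre at small incidence angles to just under the slab thickness near grazing incidence, matching the trend in Figures A2 and A3.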
Figure A1.
Effect of parallel beam shift predicted by Snell’s Law of refraction, when a beam is incident on a slab of material of RI in a medium of RI .
The systematic analysis which was carried out therefore shows that, for the glass and plastic materials that were selected, the beam shift will be smaller than that of the sidewall thickness at any angle of incidence and therefore at any point on the object. For the hollow objects selected, the sidewalls (0.15–2 mm) are small compared to their diameter (52–80 mm), so refraction is not expected to affect the reconstructed shape considerably. Especially so, when combined with the effect described in
Appendix A.2.
Figure A2.
Beam shift distance d as the angle of incidence increases between 1° and 89°, as calculated by Snell’s law, for PET of RI 1.58 and different thicknesses h between 0.2 and 2 mm.
Figure A3.
Beam shift distance d as the angle of incidence increases between 1° and 89°, as calculated by Snell’s law, for BK7 glass of RI 1.51 and different thicknesses h between 0.2 and 2 mm.
Appendix A.2. Light Rays Propagating through Hollow Circular Objects
The second effect that makes hollow cylindrical objects especially measurable using OPT is that parallel light rays experience two opposing beam shifts as they propagate through these objects: a first parallel beam shift towards the center of the object as the ray enters the hollow transparent object, and a second beam shift away from the center of the object as the ray exits it. This effect is simulated for multiple parallel beams entering a hollow thin-walled circular disk made of PET with an RI of 1.58 in
Figure A4.
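The near-cancellation of the two opposing shifts can be sketched numerically by treating each wall crossing as a local flat slab. The following minimal Python example is our own illustrative code (not the ray tracer used for Figure A4), with the outer radius, wall thickness, and ray height chosen as plausible values within the ranges quoted in this appendix:

```python
import math

def slab_shift(theta_i, h, n):
    """Parallel beam shift through a flat slab of thickness h and RI n."""
    theta_t = math.asin(math.sin(theta_i) / n)
    return h * math.sin(theta_i - theta_t) / math.cos(theta_t)

def net_shift_hollow_disk(y, R=40.0, t=1.0, n=1.58):
    """Net lateral shift of a horizontal ray at height y (mm) crossing the
    near and far walls of a hollow disk of outer radius R (mm), wall
    thickness t (mm), and RI n, with each wall treated as a local flat slab.

    The near wall shifts the ray towards the disk axis; the far wall shifts
    it away by almost the same amount, leaving only a tiny residual."""
    theta1 = math.asin(y / R)          # incidence angle on the near wall
    d1 = slab_shift(theta1, t, n)      # shift towards the axis
    theta2 = math.asin((y - d1) / R)   # slightly smaller angle at the far wall
    d2 = slab_shift(theta2, t, n)      # shift away from the axis
    return d1 - d2                     # residual net shift

print(net_shift_hollow_disk(20.0))  # residual is on the order of microns
```

For a ray entering at 20 mm above the axis, each individual wall shift is about 0.2 mm, but the two nearly cancel, which is the effect exploited here.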
When the camera is placed sufficiently far away, the rays that reach the lens are predominantly those that entered the scene parallel to one another. This can be seen in the ray-tracing simulations performed (
Figure A4), where parallel beams become slightly convergent after passing through the object. This is advantageous, as these rays can be collected by a single camera placed ‘far away’ from the object, without the need to ‘stitch an image’ as in other large-scale OPT approaches [
38]. In our experiment, we placed our camera 70 cm from the object, which, compared to the object diameters of 5–8 cm, corresponds to a distance-to-size ratio of at least ≈9:1. The field light source selected was a white-light LED panel, which was also placed as far back as possible whilst still illuminating the whole object (in our case, 20 cm away). The total distance between the light source and the camera was therefore 90 cm.
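As a rough numerical check of the ‘far away’ condition, the distance-to-size ratio and the largest angle between collected rays and the optical axis can be estimated from the 70 cm stand-off and 8 cm maximum diameter quoted above (a sketch with our own variable names):

```python
import math

standoff_cm = 70.0     # camera-to-object distance from the text
max_diameter_cm = 8.0  # largest object diameter from the text

# Half-angle subtended by the object at the camera lens: rays accepted by
# the lens deviate from the optical axis by at most this much.
half_angle_deg = math.degrees(math.atan((max_diameter_cm / 2) / standoff_cm))
ratio = standoff_cm / max_diameter_cm

print(f"distance-to-size ratio: {ratio:.2f}:1")        # 8.75:1
print(f"maximum ray half-angle: {half_angle_deg:.2f} deg")
```

A half-angle of roughly 3° supports treating the collected rays as approximately parallel.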
In solid (non-hollow) cylindrically symmetric PET objects, on the other hand, the light beams pass through far more optical material, which causes larger amounts of refraction. The overall beam paths are therefore altered to a far greater extent, resulting in an intense lensing effect that prevents easy reconstruction using OPT (
Figure A4).
Figure A4.
Refraction of visible light rays from a solid disk object (
a) and a hollow disk object with a thin sidewall (
b) both with an RI of 1.58. For a thin-walled hollow object, the rays experience less refraction. Created using [
39].
References
- Ihrke, I.; Kutulakos, K.N.; Lensch, H.P.A.; Magnor, M.; Heidrich, W. Transparent and Specular Object Reconstruction. Comput. Graph. Forum 2010, 29, 2400–2426.
- Meriaudeau, F.; Rantoson, R.; Adal, K.M.; Fofi, D.; Stolz, C. Non-conventional imaging systems for 3D digitization of transparent objects: Shape from polarization in the IR and shape from visible fluorescence induced UV. In Proceedings of the 3rd International Topical Meeting on Optical Sensing and Artificial Vision: OSAV’2012, Saint Petersburg, Russia, 14–17 May 2012; AIP Publishing: Melville, NY, USA, 2013; pp. 34–40.
- Karami, A.; Battisti, R.; Menna, F.; Remondino, F. 3D Digitization of Transparent and Glass Surfaces: State of the Art and Analysis of Some Methods. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B2-2022, 695–702.
- Born, M.; Wolf, E. Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th ed.; Cambridge University Press: Cambridge, UK, 1999.
- Stavroulakis, P.I.; Leach, R.K. Invited Review Article: Review of post-process optical form metrology for industrial-grade metal additive manufactured parts. Rev. Sci. Instrum. 2016, 87, 041101.
- Yang, Y.; Chen, S.; Wang, L.; He, J.; Wang, S.-M.; Sun, L.; Shao, C. Influence of Coating Spray on Surface Measurement Using 3D Optical Scanning Systems. In Volume 1: Additive Manufacturing; Manufacturing Equipment and Systems; Bio and Sustainable Manufacturing, Proceedings of the International Manufacturing Science and Engineering Conference, Erie, PA, USA, 10–14 June 2019; American Society of Mechanical Engineers: New York, NY, USA, 2019; p. V001T02A009.
- Trifonov, B.; Bradley, D.; Heidrich, W. Tomographic Reconstruction of Transparent Objects. In Proceedings of the ACM SIGGRAPH 2006 Sketches, SIGGRAPH ’06, New York, NY, USA, 30 July 2006; p. 55-es.
- Guo, H.; Zhou, H.; Banerjee, P.P. Use of structured light in 3D reconstruction of transparent objects. Appl. Opt. 2022, 61, B314–B324.
- Landmann, M.; Speck, H.; Dietrich, P.; Heist, S.; Kühmstedt, P.; Notni, G. Fast 3D Shape Measurement of Transparent Glasses by Sequential Thermal Fringe Projection. EPJ Web Conf. 2020, 238, 06008.
- Meriaudeau, F.; Alonso Sanchez Secades, L.; Eren, G.; Ercil, A.; Truchetet, F.; Aubreton, O.; Fofi, D. 3-D Scanning of Nonopaque Objects by Means of Imaging Emitted Structured Infrared Patterns. IEEE Trans. Instrum. Meas. 2010, 59, 2898–2906.
- Huang, H.; Yuan, E.; Zhang, D.; Sun, D.; Yang, M.; Zheng, Z.; Zhang, Z.; Gao, L.; Panezai, S.; Qiu, K. Free Field of View Infrared Digital Holography for Mineral Crystallization. Cryst. Growth Des. 2023, 23, 7992–8008.
- Rantoson, R.; Stolz, C.; Fofi, D.; Meriaudeau, F. Optimization of transparent objects digitization from visible fluorescence ultraviolet induced. Opt. Eng. 2012, 51, 033601.
- Hullin, M.B.; Fuchs, M.; Ihrke, I.; Seidel, H.P.; Lensch, H.P.A. Fluorescent Immersion Range Scanning. ACM Trans. Graph. 2008, 27, 1–10.
- Fried, P.; Woodward, J.; Brown, D.; Harvell, D.; Hanken, J. 3D scanning of antique glass by combining photography and computed tomography. Digit. Appl. Archaeol. Cult. Herit. 2020, 18, e00147.
- Ferraton, M.; Stolz, C.; Mériaudeau, F. Optimization of a polarization imaging system for 3D measurements of transparent objects. Opt. Express 2009, 17, 21077–21082.
- Shao, M.; Xia, C.; Duan, D.; Wang, X. Polarimetric Inverse Rendering for Transparent Shapes Reconstruction. arXiv 2022, arXiv:2208.11836.
- Michel, D.; Zabulis, X.; Argyros, A.A. Shape from interaction. Mach. Vis. Appl. 2014, 25, 1077–1087.
- Mikhnevich, M.; Laurendeau, D. Shape from Silhouette in Space, Time and Light Domains. In Proceedings of the 9th International Conference on Computer Vision Theory and Applications, Lisbon, Portugal, 5–8 January 2014; SCITEPRESS—Science and Technology Publications: Setúbal, Portugal, 2014; pp. 368–377.
- Munkberg, J.; Hasselgren, J.; Shen, T.; Gao, J.; Chen, W.; Evans, A.; Mueller, T.; Fidler, S. Extracting Triangular 3D Models, Materials, and Lighting From Images. arXiv 2021, arXiv:2111.12503.
- Li, Z.; Yeh, Y.Y.; Chandraker, M. Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1259–1268.
- Guerboukha, H.; Nallappan, K.; Skorobogatiy, M. Toward real-time terahertz imaging. Adv. Opt. Photonics 2018, 10, 843.
- Recur, B.; Younus, A.; Salort, S.; Mounaix, P.; Chassagne, B.; Desbarats, P.; Caumes, J.P.; Abraham, E. Investigation on reconstruction methods applied to 3D terahertz computed tomography. Opt. Express 2011, 19, 5105.
- Mathai, A.; Guo, N.; Liu, D.; Wang, X. 3D Transparent Object Detection and Reconstruction Based on Passive Mode Single-Pixel Imaging. Sensors 2020, 20, 4211.
- Phillips, C.; Lecce, M.; Daniilidis, K. Seeing Glassware: From Edge Detection to Pose Estimation and Shape Recovery. In Proceedings of the Robotics: Science and Systems, Virtual, 12–16 July 2016.
- Fortune Business Insights. Beverage Packaging Market Size, Share and Industry Analysis, by Material, by Product Type, by Application, and Regional Forecasts 2023–2030; Report ID: FBI102112. Available online: https://www.fortunebusinessinsights.com/beverage-packaging-market-102112 (accessed on 28 November 2023).
- Sharpe, J.; Ahlgren, U.; Perry, P.; Hill, B.; Ross, A.; Hecksher-Sørensen, J.; Baldock, R.; Davidson, D. Optical Projection Tomography as a Tool for 3D Microscopy and Gene Expression Studies. Science 2002, 296, 541–545.
- van Aarle, W.; Palenstijn, W.J.; Cant, J.; Janssens, E.; Bleichrodt, F.; Dabravolski, A.; Beenhouwer, J.D.; Batenburg, K.J.; Sijbers, J. Fast and flexible X-ray tomography using the ASTRA toolbox. Opt. Express 2016, 24, 25129–25147.
- Laurentini, A. The visual hull concept for silhouette-based image understanding. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 150–162.
- Cloud Compare Software. Available online: https://www.cloudcompare.org/ (accessed on 7 March 2023).
- Feldkamp, L.A.; Davis, L.C.; Kress, J.W. Practical cone-beam algorithm. J. Opt. Soc. Am. A 1984, 1, 612–619.
- Helmholz, P.; Belton, D.; Oliver, N.; Hollick, J.; Woods, A.J. The Influence of the Point Cloud Comparison Methods on the Verification of Point Clouds Using the Batavia Reconstruction as a Case Study. In Proceedings of the Sixth International Congress for Underwater Archaeology (IKUWA6 Shared Heritage), Fremantle, Australia, 28 November–2 December 2016; Archaeopress Archaeology: Oxford, UK, 2020.
- Antova, G. Application of Areal Change Detection Methods Using Point Clouds Data. IOP Conf. Ser. Earth Environ. Sci. 2019, 221, 012082.
- Seitz, S.; Curless, B.; Diebel, J.; Scharstein, D.; Szeliski, R. A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Volume 1 (CVPR’06), New York, NY, USA, 17–22 June 2006.
- Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020.
- Figueiras, E.; Soto, A.M.; Jesus, D.; Lehti, M.; Koivisto, J.; Parraga, J.E.; Silva-Correia, J.; Oliveira, J.M.; Reis, R.L.; Kellomäki, M.; et al. Optical projection tomography as a tool for 3D imaging of hydrogels. Biomed. Opt. Express 2014, 5, 3443–3449.
- Ertürk, A.; Becker, K.; Jährling, N.; Mauch, C.P.; Hojer, C.D.; Egen, J.G.; Hellal, F.; Bradke, F.; Sheng, M.; Dodt, H.U. Three-dimensional imaging of solvent-cleared organs using 3DISCO. Nat. Protoc. 2012, 7, 1983–1995.
- MATWEB Material Properties. Available online: https://www.matweb.com/search/DataSheet.aspx?MatGUID=a696bdcdff6f41dd98f8eec3599eaa20&ckck=1 (accessed on 7 March 2023).
- Lee, K.J.I.; Calder, G.M.; Hindle, C.R.; Newman, J.L.; Robinson, S.N.; Avondo, J.J.H.Y.; Coen, E.S. Macro optical projection tomography for large scale 3D imaging of plant structures and gene activity. J. Exp. Bot. 2016, 68, 527–538.
- Ray Simulator for Optics. Available online: https://phydemo.app/ray-optics/simulator/ (accessed on 7 March 2023).
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).