Article

Example-Based Multispectral Photometric Stereo for Multi-Colored Surfaces

Graduate School of Information Sciences, Hiroshima City University, Hiroshima 731-3194, Japan
* Author to whom correspondence should be addressed.
J. Imaging 2022, 8(4), 107; https://doi.org/10.3390/jimaging8040107
Submission received: 25 February 2022 / Revised: 5 April 2022 / Accepted: 6 April 2022 / Published: 11 April 2022

Abstract
Photometric stereo requires three images taken under three different light directions, lit one at a time, whereas color photometric stereo requires only one image taken under three lights of different colors lit simultaneously from different directions. As a result, color photometric stereo can obtain the surface normal of a dynamically moving object from a single image. However, conventional color photometric stereo cannot handle a multicolored object because of the colored illumination. This paper applies an example-based photometric stereo to solve this problem of color photometric stereo. The example-based photometric stereo searches for the surface normal in a database of images of known shapes. Color photometric stereo suffers from mathematical difficulty, and existing methods add many assumptions and constraints; the example-based photometric stereo is free from such mathematical problems. Our process is pixelwise; thus, the estimated surface normal is not oversmoothed, unlike existing methods that use smoothness constraints. To demonstrate the effectiveness of this study, we employ a measurement device that realizes a multispectral photometric stereo with sixteen colors instead of the classic color photometric stereo with three colors.

1. Introduction

The photometric stereo method is not suitable for modeling a moving object, since several images under different light source directions are needed. The color photometric stereo method, which employs red, green, and blue lights from three different directions, can measure the shape of a moving object. Unlike the common color photometric stereo method, we use 16 narrow-band lights with different peak wavelengths while observing the target object with a 16-band multispectral camera.

1.1. Related Work

The shape-from-shading method [1,2,3,4,5,6] and the photometric stereo method [7,8] estimate the surface normal of an object by illuminating it and analyzing the resulting shading on its surface. Unlike shape-from-shading, which uses one image, photometric stereo captures three images with different light source directions; therefore, it cannot measure a dynamic object. This problem can be resolved by the color photometric stereo method [9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28] (also known as shape-from-color). Color photometric stereo takes one picture with an RGB color camera under red, green, and blue light sources. Such a one-shot photograph enables the measurement of a dynamic object. However, color photometric stereo has a major limitation: it can only be applied to white objects. This problem is inevitable as long as colored light sources are used to estimate the surface normal.
Some methods [20,25,29] use multiple images to apply color photometric stereo to multicolored objects. These methods cannot estimate the surface normal from a single image; thus, optical flow is used to track identical points on the object surface across multiple images. Fyffe et al. [16] used three lights that appear white to the human eye and observed the target objects with a six-band camera. Each of the three lights has a different spectral distribution, which the six-band camera can distinguish; they estimate the surface normal without disturbing the scene's appearance to the human eye. As done by Anderson et al. [9], using a shape obtained from other methods such as multiview stereo enables color photometric stereo to be applied to multicolored objects. Chakrabarti et al. [11] and Jiao et al. [19] assumed that a certain limited area has the same albedo; this assumption enables color photometric stereo to be applied to multicolored objects that can be segmented into regions of uniform color.
Example-based photometric stereo [30,31,32,33,34] estimates the surface normal by a database search. These methods capture images of objects with known shapes and assume that the material properties of the objects in the database and the objects to be measured are the same. If the appearances of pixels on those two types of objects are the same, the pixels are expected to have the same surface normal. The example-based photometric stereo has been used for the conventional photometric stereo problem, which assumes the same albedo for every light, but not for the color photometric stereo problem, where the albedo differs for each light.

1.2. Our Work

In this paper, the problem faced by the color photometric stereo method is solved using a different approach from those used in previous studies. We use the example-based photometric stereo to solve the problem of the color photometric stereo. Our approach handles the problems of shadows, specular reflection, and channel crosstalk.
Unlike Guo et al. [35], our method can be applied to objects whose chromaticity and albedo are both spatially varying. The techniques of Gotardo et al. [29], Kim et al. [20], and Roubtsova et al. [25] need optical flow to measure a dynamic object, while the technique of Fyffe et al. [16] requires a reflectance database to be prepared prior to the measurement. Unlike the technique of Anderson et al. [9], our proposed technique does not require a shape obtained from other sensors such as multi-view stereo or a laser sensor. Moreover, unlike the techniques of Chakrabarti et al. [11] and Jiao et al. [19], our proposed method does not require region segmentation. Our method is not oversmoothed by median filtering [36] and is not affected by randomness [37].
Previous color photometric stereo methods used three lights with red, green, and blue colors and observed the object with an RGB color camera. In our study, 16 lights with different wavelengths are used to illuminate the object, which is then observed by a 16-band multispectral camera. This paper empirically proves that the example-based photometric stereo is also useful for color photometric stereo situations.
Section 2 and Section 3 explain the fundamental theory of the color photometric stereo and example-based photometric stereo, respectively. Section 4 explains our example-based multispectral photometric stereo. Section 5 and Section 6 show the experimental results and the conclusion, respectively. In particular, Section 5.5 discusses the advantages and disadvantages of our method.

2. Color Photometric Stereo

A photometric stereo method that employs independently colored lights is called the color photometric stereo method. A characteristic of this method is that it enables estimation of the surface normal from a single shot. The widespread color photometric stereo method uses three colored lights. While the conventional photometric stereo method yields several grayscale images, the color photometric stereo method yields one multispectral image.
Given the surface normal n and the light source direction l_c of channel c, the brightness I_c of the multispectral image is:
I_c = A_c max(n · l_c, 0). (1)
Hereinafter, we call A_c the albedo. Note that the camera sensitivity and the light source brightness are included in A_c.
As shown in Figure 1, this study conducts a photoshoot of a multicolored object using 16 channels. Following Equation (1), the brightness is obtained from this photoshoot as follows.
I_0 = A_0 max(n · l_0, 0),
I_1 = A_1 max(n · l_1, 0),
⋮
I_15 = A_15 max(n · l_15, 0). (2)
The surface normal n is a 3D vector, but its degree of freedom is two because it is constrained to be a unit vector (the constraint removes one degree of freedom). The albedo A_c is represented by 16 parameters. There are 16 equations, as shown in Equation (2), and 18 unknown parameters (A_0, A_1, …, A_15, n_x, n_y, n_z, s.t. n_x² + n_y² + n_z² = 1; namely, 16 for albedo and 2 for surface normal). Therefore, color photometric stereo is an ill-posed problem.
The most commonly used assumption is to limit the color of the target object to white (A_0 = A_1 = ⋯ = A_15). The color photometric stereo for white objects, in other words the conventional photometric stereo, can solve for the surface normal directly, without iterative optimization or additional constraints such as smoothness. However, this paper addresses multi-colored objects.
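As a minimal sketch (not the authors' code), the per-channel image formation of Equations (1) and (2) can be simulated as follows; the light directions and albedos below are hypothetical:

```python
import numpy as np

def render_multispectral(normal, light_dirs, albedos):
    """Render the 16-channel brightness of one surface point,
    applying Equation (1) per channel: I_c = A_c * max(n . l_c, 0)."""
    normal = normal / np.linalg.norm(normal)          # unit surface normal
    shading = np.maximum(light_dirs @ normal, 0.0)    # (16,) clamped cosines
    return albedos * shading                          # (16,) observed intensities

# Hypothetical setup: 16 random light directions and per-channel albedos.
rng = np.random.default_rng(0)
light_dirs = rng.normal(size=(16, 3))
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
albedos = rng.uniform(0.2, 1.0, size=16)

I = render_multispectral(np.array([0.0, 0.0, 1.0]), light_dirs, albedos)
# 16 measurements, but 16 albedos + 2 normal DOF = 18 unknowns: ill-posed.
```

This makes the ill-posedness above concrete: each pixel yields 16 numbers, while the unknowns per pixel number 18.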

3. Example-Based Photometric Stereo

The example-based photometric stereo (Figure 2) uses reference objects of known shape to estimate the surface normal, and it can be applied to non-Lambertian surfaces. It measures two objects, one of known and one of unknown shape, in the same scene. The two objects should have the same material property.
A sphere is often used as the reference object. If the surface normal of the target object and the surface normal of the reference object coincide, their brightnesses also coincide, because the material property, light direction, and camera direction are the same. Therefore, the example-based photometric stereo can estimate the surface normal of objects with an arbitrary BRDF (bidirectional reflectance distribution function). Its disadvantage is that reference objects with the same material property as the target object are needed. Interreflection between surface points is not considered in this method.

4. Proposed Method

4.1. Example-Based Multispectral Photometric Stereo

Because the unknowns exceed the inputs, existing methods add constraints such as smoothness in order to solve the problem. Such an approach oversmoothes the albedo and the surface normal. Our method does not require any such constraints.
We observe the object illuminated by 16 lights with different wavelengths using the multispectral camera (Figure 1). The observation vector at pixel (y_Q, x_Q) of the query image (the image of the target object) is denoted (I_{Q,0}, I_{Q,1}, …, I_{Q,15}), and the observation vector at pixel (y_R, x_R) of a reference image (an image in the database) is denoted (I_{R,0}, I_{R,1}, …, I_{R,15}). If the query's albedo (A_{Q,0}, A_{Q,1}, …, A_{Q,15}) coincides with the reference's albedo (A_{R,0}, A_{R,1}, …, A_{R,15}) and the query's observation vector coincides with the reference's observation vector, then the surface normal at (y_Q, x_Q) coincides with the surface normal at (y_R, x_R). Each element of the 16-dimensional observation vector (Figure 3) is given by Equation (2).
We search the pixel position of the reference object where the query’s observation vector coincides with the reference’s observation vector (Figure 4). The query’s surface normal is determined from the pixel position of the reference found. Multiple spheres with different paints are used as the reference. The search of the observation vector is performed for all pixels of all reference spheres.
Our method (Equation (3)) searches the pixel position where the squared error of the 16-dimensional vector becomes the minimum.
n(y_Q, x_Q) = n_R(s, y_R, x_R),
s.t. (s, y_R, x_R) = argmin_{s, y_R, x_R} Σ_{c ∈ C} ( I_Q(y_Q, x_Q, c) − I_R(s, y_R, x_R, c) )², s ∈ S, (y_R, x_R) ∈ P_R. (3)
Here, |C| is the number of channels (|C| = 16), |S| is the number of reference objects, and P_R is the set of reference pixels. We normalize the observation vectors of both the query image and the reference image. Thanks to this normalization, our method can be applied even if the camera exposure changes.
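A minimal sketch of the database search of Equation (3), combined with the exposure-invariant normalization described above; all data below are synthetic:

```python
import numpy as np

def match_surface_normal(query_vec, ref_vecs, ref_normals):
    """Pixelwise database search in the spirit of Equation (3): return the
    reference surface normal whose normalized 16-d observation vector is
    closest (in squared error) to the normalized query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    r = ref_vecs / np.linalg.norm(ref_vecs, axis=1, keepdims=True)
    idx = np.argmin(np.sum((r - q) ** 2, axis=1))
    return ref_normals[idx]

# Synthetic database: 100 reference pixels with known normals.
rng = np.random.default_rng(1)
ref_vecs = rng.uniform(0.1, 1.0, size=(100, 16))
ref_normals = rng.normal(size=(100, 3))
ref_normals /= np.linalg.norm(ref_normals, axis=1, keepdims=True)

q = 0.5 * ref_vecs[42]   # same pixel seen at half the camera exposure
n = match_surface_normal(q, ref_vecs, ref_normals)
```

Because both vectors are normalized before comparison, the halved exposure of the query does not change which reference pixel is matched.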
To apply our method to any object with any paint, we would have to measure all paints in the world. However, the variety of paints is limited by the available chemistry; in particular, the number of paints based on pure natural pigments is limited because the number of natural pigments is limited. In this paper, we assume that all paints can be represented by a limited number of samples. We used 18 spheres with different colors (|S| = 18).

4.2. Converting Surface Normal to Height

The shape is represented as a height H defined for each pixel. The partial derivatives of the height with respect to x and y are called gradients and are denoted p and q, respectively:
p = H_x = ∂H/∂x,  q = H_y = ∂H/∂y. (4)
The surface normal n is represented by these gradients, as shown below.
n = (−p, −q, 1) / √(p² + q² + 1). (5)
The cost function that relates the surface normal to the height is shown below.
∬ ( (H_x − p)² + (H_y − q)² ) dx dy. (6)
We solve Equation (6) to calculate the height from the surface normal using existing techniques.
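Equation (6) can be minimized by classical integration techniques; one such existing technique is Frankot–Chellappa integration, which solves the least-squares problem in the Fourier domain. Below is a sketch under periodic boundary assumptions (not necessarily the specific technique the authors used):

```python
import numpy as np

def integrate_normals(p, q):
    """Frankot-Chellappa integration: least-squares height H minimizing
    Equation (6), solved in the Fourier domain. p = dH/dx, q = dH/dy."""
    h, w = p.shape
    wx = np.fft.fftfreq(w) * 2 * np.pi        # angular frequencies along x
    wy = np.fft.fftfreq(h) * 2 * np.pi        # angular frequencies along y
    u, v = np.meshgrid(wx, wy)
    denom = u ** 2 + v ** 2
    denom[0, 0] = 1.0                          # avoid divide-by-zero at DC
    Hf = (-1j * u * np.fft.fft2(p) - 1j * v * np.fft.fft2(q)) / denom
    Hf[0, 0] = 0.0                             # height is known only up to a constant
    return np.real(np.fft.ifft2(Hf))

# Sanity check: recover a sinusoidal height field from its analytic gradients.
x = np.arange(64)
X, _ = np.meshgrid(x, x)
H_true = np.sin(2 * np.pi * X / 64)
p = (2 * np.pi / 64) * np.cos(2 * np.pi * X / 64)
q = np.zeros_like(p)
H = integrate_normals(p, q)
```

For the periodic test field above, the reconstruction matches H_true up to an additive constant, which is the inherent ambiguity of integrating gradients.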

4.3. Channel Crosstalk

The conventional color photometric stereo assumes that the camera spectral response is a delta function. Figure 5b is an example where only the G channel detects 550 nm light. On the other hand, Figure 5a is an example where the sensor has channel crosstalk: the spectral responses of the R, G, and B channels partially overlap in the spectral domain. In this example, the sensor detects (R, G, B) = (63, 255, 63) instead of (R, G, B) = (0, 255, 0) (Figure 5b) when 550 nm light is observed; the red and blue channels are excited even though the observed light is purely green. Such channel crosstalk is problematic for the conventional color photometric stereo, which assumes that, for example, only the green channel detects green light. Channel crosstalk occurs in most cameras, which makes color photometric stereo difficult. However, as discussed in Section 5.5, our method is free from the channel crosstalk problem.
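The crosstalk example can be reproduced with a toy model in which each channel's spectral sensitivity is a Gaussian; the peak wavelengths and the 60 nm width below are hypothetical, chosen so that a pure 550 nm light yields roughly (R, G, B) = (63, 255, 63):

```python
import numpy as np

def sensor_response(wavelength_nm, centers, sigma):
    """Toy crosstalk model: each channel's spectral sensitivity is a
    Gaussian centered at its peak wavelength. Overlapping Gaussians mean
    a monochromatic light excites more than one channel."""
    return np.exp(-0.5 * ((wavelength_nm - centers) / sigma) ** 2)

centers = np.array([650.0, 550.0, 450.0])   # hypothetical R, G, B peaks (nm)
resp = 255 * sensor_response(550.0, centers, sigma=60.0)
# A pure 550 nm (green) light also excites the R and B channels (~63 each).
```

With a true delta-function response (sigma → 0), resp would collapse to (0, 255, 0), which is exactly the assumption the conventional color photometric stereo relies on.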

5. Experiment

5.1. Experimental Setup

We perform our experiment in a dark room, as shown in Figure 6, where the target object is illuminated by 16 different lights. We use an IMEC-HS-16-USB-customized camera (Imec, Belgium) as the multispectral camera. Figure 7 and Table 1 show the spectral sensitivity of the camera, where channel crosstalk occurs among all camera channels. Table 2 shows the peak wavelength of each light source used in this experiment. To increase the amount of supplementary information obtained for objects with narrow-wavelength regions, light sources of close wavelengths were positioned apart from each other: as shown in Table 2, lights of adjacent wavelengths are placed more than one Manhattan distance apart in the 4 × 4 grid. The locations of the light sources and the camera were left unchanged during the experiments. We assume that the light sources and the camera are infinitely far from the target object. This paper represents the surface normal in pseudo-color, where the x, y, and z components of the normal vector are mapped to R, G, and B of the image. Each sphere image is trimmed and scaled to 128 × 128 pixels. The sphere objects shown in Figure 8 are painted with 18 different paints. The size of the query image is 512 × 256. The target objects are opaque. Our method could estimate the surface normal of metals only if the number of lights were infinite; it cannot do so with a finite number of lights. Transparent objects are even more difficult to measure due to transmission.
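The placement constraint can be illustrated with a hypothetical 4 × 4 layout (the paper's actual layout is the one in Table 2, not this one): ordering the grid cells by the parity of row + column guarantees that lights of adjacent wavelengths sit more than one Manhattan distance apart.

```python
# Hypothetical 4x4 placement: index i is the i-th light in wavelength order.
# Visiting all cells with even row+column first, then all odd cells, keeps
# every pair of wavelength-adjacent lights >= 2 apart in Manhattan distance.
even = [(r, c) for r in range(4) for c in range(4) if (r + c) % 2 == 0]
odd = [(r, c) for r in range(4) for c in range(4) if (r + c) % 2 == 1]
placement = even + odd   # 16 positions, one per wavelength rank

def manhattan(a, b):
    """Manhattan (L1) distance between two grid cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Verify the constraint for all consecutive wavelength pairs.
assert all(manhattan(placement[i], placement[i + 1]) > 1 for i in range(15))
```

The point of such a layout is that spatially neighboring lights differ strongly in wavelength, so nearby shadows and shadings carry complementary spectral information.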

5.2. Evaluation

First, we measured the spherical object shown in Figure 9a, which consists of two albedos painted with paints included in the reference objects. The error is evaluated as the angle between the estimated surface normal and the true surface normal. To evaluate this, we need an object whose true surface normal is known, so we measured a sphere. The true surface normal can be derived mathematically from the sphere's center and radius. Let (x, y) be the pixel of interest, (x̄, ȳ) the center of the sphere, and r its radius. Then, the true surface normal (n_x, n_y, n_z) is calculated as follows:
n_x = (x − x̄) / r, (7)
n_y = (y − ȳ) / r, (8)
n_z = √(1 − n_x² − n_y²). (9)
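A sketch of this evaluation procedure, implementing Equations (7)–(9) and the angular error metric (variable names are ours, not the paper's):

```python
import numpy as np

def sphere_true_normal(x, y, cx, cy, r):
    """True surface normal of a sphere, per Equations (7)-(9), from the
    sphere's center (cx, cy) and radius r in image coordinates."""
    nx = (x - cx) / r
    ny = (y - cy) / r
    nz = np.sqrt(max(0.0, 1.0 - nx ** 2 - ny ** 2))
    return np.array([nx, ny, nz])

def angular_error(n_est, n_true):
    """Error metric of Section 5.2: angle (rad) between two normals."""
    c = np.dot(n_est, n_true) / (np.linalg.norm(n_est) * np.linalg.norm(n_true))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Example: a pixel halfway out from the center of a radius-20 sphere.
n = sphere_true_normal(10.0, 0.0, 0.0, 0.0, 20.0)
err = angular_error(n, n)   # identical normals -> zero angular error
```

Averaging this angular error over all sphere pixels yields scalar scores directly comparable to the 0.690, 0.888, and 0.198 rad values reported below.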
Since we know the true surface normal from Equations (7)–(9), we can evaluate the performance of the methods by measuring a sphere. Figure 9b–d show the error maps in pseudo-color representation. We compared our method with the conventional photometric stereo (Figure 9b); the color photometric stereo that assumes white target objects is identical to the conventional photometric stereo. Furthermore, we compared our method with an existing method [35] (Figure 9c). The error of the conventional photometric stereo (color photometric stereo with a white object) was 0.690 (rad), the error of the existing method (Guo et al. [35]) was 0.888 (rad), and the error of our method was 0.198 (rad), which demonstrates the high performance of our method.

5.3. Real Objects

We apply the existing method [36] and our method to the object shown in Figure 10a. The surface normals estimated by the existing and proposed methods are shown in Figure 10b,c, respectively. Here, the x, y, and z components of the surface normal are represented as red, green, and blue. Unlike the existing method, which oversmoothes the result (Figure 10b), our method is pixelwise, and the result is not oversmoothed (Figure 10c). The existing method [36] needs to segment the object region from the background (Figure 10b), while our method does not need to distinguish the foreground from the background. The existing method cannot estimate the surface normal of the background, while our method can; however, the surface normal of the background is just noise, since the background contains no object, only a completely dark void with random noise (Figure 10c).
The target objects are shown in Figure 11a. The paints used in Figure 11(3,4) are included in the reference data, while the others are not. The results for a multi-colored object, a white object, a single-colored object, an object with a dark color, and a deformable object in two different poses are shown in Figure 11(1)–(6), respectively. The surface normals estimated by our method are shown in Figure 11b. Figure 11c,d show the reconstructed shapes under two different viewing directions. Both the quantitative evaluation in Section 5.2 and the qualitative evaluation in Figure 11 demonstrate the benefit of our method. As shown in Figure 11, our method successfully estimates the surface normals of both achromatic (Figure 11(2)) and chromatic (Figure 11(1)) objects without oversmoothing.

5.4. Discussion

We did not add smoothness constraints, and thus our result is not oversmoothed. Adding smoothness constraints yields smoother results, which users often require, but it also means tuning the parameters of those constraints. Figure 12 shows the parameter tuning problem that occurred in the existing method [36]. In future work, we would like to add smoothness constraints, but the algorithm must be designed carefully, because adding smoothness constraints is not always beneficial due to oversmoothing and parameter tuning.
Our method is applicable to multi-colored objects, as shown in the experiments, where no error occurred at the color boundaries of the object (Figure 11(1)). Our method is robust to specular reflection, as shown in the experiments, where no spike-like error appeared in the result (Figure 9c). Our method cannot estimate the surface normal of a dark surface; however, this disadvantage is shared by all other photometric stereo methods (Figure 11(4)).

5.5. Contribution

Here, we summarize our advantages and disadvantages.
Our method does not suffer from channel crosstalk, since the reference objects include the effect of channel crosstalk: the query object and the reference objects are measured under the same lights and with the same camera. For the same reason, our method is not affected by the spectral distribution of the lights or the spectral/radiometric response of the camera. Our process is pixelwise, and thus the result is not affected by neighboring pixels. The light source directions do not need to be measured, because the target and reference objects are illuminated under the same illumination environment. Furthermore, we need not adjust each light source to the same intensity. Our method is not limited to Lambertian surfaces and is not affected by shadows. If we prepare reference objects with specular reflection, our method can be applied to objects with specular reflection.
The disadvantage of our method is that we need many reference objects. Furthermore, we have to measure the query object with the same device that the reference objects are taken since the light and the camera information are included in the reference objects.
The number of reference objects relates to both advantages and disadvantages. If we increase the number of reference objects, our method can be applied to more types of paints. However, similar observation vectors are then more likely to appear in the database. This is a characteristic of the example-based multispectral photometric stereo compared to the example-based conventional photometric stereo: the albedo A_0, A_1, …, A_15 has 16 degrees of freedom in our method but only 1 degree of freedom in the example-based photometric stereo. Due to the higher degrees of freedom, a unique database match may no longer exist if we use many reference objects. This is the dilemma of our method: whether to increase or decrease the number of reference objects.

6. Conclusions

Our method estimated the surface normal of multi-colored objects using 16 lights. The light source directions do not need to be measured. The query and reference objects are observed by a multispectral camera. We measured many spheres, each painted with a single color from various paints. Two points on a surface have the same surface normal if their material properties, light source directions, and camera direction are the same. We estimated the surface normal of the target object by finding the pixel where the data of the query image coincide with the data of the reference images.
Our experimental results show that our method has successfully estimated the surface normal of multi-colored objects. However, the dark albedo has caused some errors.
In this work, we scanned all reference objects. However, it is well known that the spectral reflectance of any paint can be represented by a small number of basis functions. We conjecture that PCA (principal component analysis) bases can represent the data with a small number of basis functions. Our future work is to incorporate PCA into our method.

Author Contributions

Conceptualization, D.M.; methodology, D.M.; software, D.M. and K.U.; validation, D.M. and K.U.; formal analysis, D.M. and K.U.; investigation, D.M.; resources, D.M.; data curation, D.M. and K.U.; writing—original draft preparation, D.M. and K.U.; writing—review and editing, D.M.; visualization, D.M. and K.U.; supervision, D.M.; project administration, D.M.; funding acquisition, D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by JSPS KAKENHI Grant Number 18H04119 and in part by JSPS KAKENHI Grant Number 20H00612.

Acknowledgments

The authors thank Ryo Furukawa, Masashi Baba, and Michihiro Mikamo for useful discussions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Rindfleisch, T. Photometric method for lunar topography. Photogramm. Eng. 1966, 32, 262–277. [Google Scholar]
  2. Horn, B.K.P. Obtaining shape from shading information. In The Psychology of Computer Vision; Winston, P.H., Ed.; McGraw-Hill: New York, NY, USA, 1975; pp. 115–155. [Google Scholar]
  3. Horn, B.K.P.; Brooks, M.J. The variational approach to shape from shading. Comput. Vis. Graph. Image Process. 1986, 33, 174–208. [Google Scholar] [CrossRef] [Green Version]
  4. Horn, B.K.P. Height and gradient from shading. Int. J. Comput. Vis. 1990, 5, 37–75. [Google Scholar] [CrossRef]
  5. Zhang, R.; Tsai, P.S.; Cryer, J.E.; Shah, M. Shape-from-shading: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 690–706. [Google Scholar] [CrossRef] [Green Version]
  6. Durou, J.-D.; Falcone, M.; Sagona, M. Numerical methods for shape-from-shading: A new survey with benchmarks. Comput. Vis. Image Underst. 2008, 109, 22–43. [Google Scholar] [CrossRef]
  7. Silver, W.M. Determining Shape and Reflectance Using Multiple Images. Master’s Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 1980. [Google Scholar]
  8. Woodham, R.J. Photometric method for determining surface orientation from multiple images. Opt. Eng. 1980, 19, 139–144. [Google Scholar] [CrossRef]
  9. Anderson, R.; Stenger, B.; Cipolla, R. Color photometric stereo for multicolored surfaces. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2182–2189. [Google Scholar]
  10. Brostow, G.J.; Stenger, B.; Vogiatzis, G.; Hernández, C.; Cipolla, R. Video normals from colored lights. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2104–2114. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  11. Chakrabarti, A.; Sunkavalli, K. Single-image RGB photometric stereo with spatially-varying albedo. In Proceedings of the International Conference on 3D Vision, Stanford, CA, USA, 25–28 October 2016; pp. 258–266. [Google Scholar]
  12. Drew, M.; Kontsevich, L. Closed-form attitude determination under spectrally varying illumination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 21–23 June 1994; pp. 985–990. [Google Scholar]
  13. Drew, M.S. Reduction of rank-reduced orientation-from-color problem with many unknown lights to two-image known-illuminant photometric stereo. In Proceedings of the International Symposium on Computer Vision, Coral Gables, FL, USA, 21–23 November 1995; pp. 419–424. [Google Scholar]
  14. Drew, M.S. Direct solution of orientation-from-color problem using a modification of Pentland’s light source direction estimator. Comput. Vis. Image Underst. 1996, 64, 286–299. [Google Scholar] [CrossRef] [Green Version]
  15. Drew, M.S.; Brill, M.H. Color from shape from color: A simple formalism with known light sources. J. Opt. Soc. Am. A 2000, 17, 1371–1381. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Fyffe, G.; Yu, X.; Debevec, P. Single-shot photometric stereo by spectral multiplexing. In Proceedings of the 2011 IEEE International Conference on Computational Photography (ICCP), Pittsburgh, PA, USA, 8–10 April 2011; pp. 1–6. [Google Scholar]
  17. Hernandez, C.; Vogiatzis, G.; Brostow, G.J.; Stenger, B.; Cipolla, R. Non-rigid photometric stereo with colored lights. In Proceedings of the 2007 IEEE 11th International Conference on Computer Vision, Rio de Janeiro, Brazil, 14–21 October 2007; p. 8. [Google Scholar]
  18. Hernández, C.; Vogiatzis, G.; Cipolla, R. Shadows in three-source photometric stereo. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2008; pp. 290–303. [Google Scholar]
  19. Jiao, H.; Luo, Y.; Wang, N.; Qi, L.; Dong, J.; Lei, H. Underwater multi-spectral photometric stereo reconstruction from a single RGBD image. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, Jeju, Korea, 13–16 December 2016; pp. 1–4. [Google Scholar]
  20. Kim, H.; Wilburn, B.; Ben-Ezra, M. Photometric stereo for dynamic surface orientations. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2010; pp. 59–72. [Google Scholar]
  21. Kontsevich, L.; Petrov, A.; Vergelskaya, I. Reconstruction of shape from shading in color images. J. Opt. Soc. Am. A 1994, 11, 1047–1052. [Google Scholar] [CrossRef]
  22. Landstrom, A.; Thurley, M.J.; Jonsson, H. Sub-millimeter crack detection in casted steel using color photometric stereo. In Proceedings of the 2013 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Hobart, TAS, Australia, 26–28 November 2013; pp. 1–7. [Google Scholar]
  23. Petrov, A.P.; Kontsevich, L.L. Properties of color images of surfaces under multiple illuminants. J. Opt. Soc. Am. A 1994, 11, 2745–2749. [Google Scholar] [CrossRef]
  24. Rahman, S.; Lam, A.; Sato, I.; Robles-Kelly, A. Color photometric stereo using a rainbow light for non-Lambertian multicolored surfaces. In Asian Conference on Computer Vision; Springer: Cham, Switzerland, 2015; pp. 335–350. [Google Scholar]
  25. Roubtsova, N.; Guillemaut, J.Y. Colour Helmholtz stereopsis for reconstruction of complex dynamic scenes. In Proceedings of the 2014 2nd International Conference on 3D Vision, Tokyo, Japan, 8–11 December 2014; pp. 251–258. [Google Scholar]
  26. Vogiatzis, G.; Hernández, C. Practical 3d reconstruction based on photometric stereo. In Computer Vision: Detection, Recognition and Reconstruction; Cipolla, R., Battiato, S., Farinella, G.M., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 313–345. [Google Scholar]
  27. Vogiatzis, G.; Hernandez, C. Self-calibrated, multi-spectral photometric stereo for 3D face capture. Int. J. Comput. Vis. 2012, 97, 91–103. [Google Scholar] [CrossRef] [Green Version]
  28. Woodham, R.J. Gradient and curvature from photometric stereo including local confidence estimation. J. Opt. Soc. Am. 1994, 11, 3050–3068. [Google Scholar] [CrossRef]
  29. Gotardo, P.F.U.; Simon, T.; Sheikh, Y.; Mathews, I. Photogeometric scene flow for high-detail dynamic 3D reconstruction. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 846–854. [Google Scholar]
  30. Goldman, D.B.; Curless, B.; Hertzmann, A.; Seitz, S.M. Shape and spatially-varying BRDFs from photometric stereo. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 1060–1071. [Google Scholar] [CrossRef] [PubMed]
  31. Hertzmann, A.; Seitz, S.M. Example-based photometric stereo: Shape reconstruction with general, varying BRDFs. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1254–1264. [Google Scholar] [CrossRef] [PubMed]
  32. Horn, B.K.P.; Ikeuchi, K. The mechanical manipulation of randomly oriented parts. Sci. Am. 1984, 251, 100–111. [Google Scholar] [CrossRef]
  33. Hui, Z.; Sankaranarayanan, A.C. Shape and spatially-varying reflectance estimation from virtual exemplars. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2060–2073. [Google Scholar] [CrossRef] [PubMed]
  34. Yeung, S.-K.; Wu, T.-P.; Tang, C.-K.; Chan, T.F.; Osher, S. Adequate reconstruction of transparent objects on a shoestring budget. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, USA, 20–25 June 2011; pp. 2513–2520. [Google Scholar]
  35. Guo, H.; Okura, F.; Shi, B.; Funatomi, T.; Mukaigawa, Y.; Matsushita, Y. Multispectral photometric stereo for spatially-varying spectral reflectances: A well posed problem? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 963–971. [Google Scholar]
  36. Miyazaki, D.; Onishi, Y.; Hiura, S. Color photometric stereo using multi-band camera constrained by median filter and occluding boundary. J. Imaging 2019, 5, 64. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. Miyazaki, D.; Hamaen, K. Multi-band photometric stereo using random sampling of channels and pixels. In Proceedings of the International Workshop on Frontiers of Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 79–93. [Google Scholar]
Figure 1. Conceptual illustration of multispectral photometric stereo. The target object is illuminated by multiple light sources with different wavelengths, and a single image is captured with a multispectral camera.
Figure 2. Brightness search of the example-based photometric stereo.
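The brightness search of Figure 2 can be sketched as a pixelwise nearest-neighbor lookup: the observed multi-channel brightness vector of each pixel is matched against a database built from reference objects of known shape, and the surface normal of the best-matching entry is returned. The sketch below is illustrative only; the database entries, the toy values, and the Euclidean matching rule are assumptions for this example, not the paper's exact procedure.

```python
import math

# Hypothetical reference database: each entry pairs a known surface normal
# (e.g., from a reference sphere) with the 16-channel brightness vector
# observed at that normal. The numbers here are purely illustrative.
reference_db = [
    ((0.0, 0.0, 1.0), [0.9, 0.8, 0.7, 0.9] * 4),
    ((0.6, 0.0, 0.8), [0.5, 0.6, 0.4, 0.5] * 4),
    ((0.0, 0.6, 0.8), [0.4, 0.3, 0.5, 0.4] * 4),
]

def lookup_normal(observation):
    """Pixelwise brightness search: return the database normal whose stored
    observation vector is closest (Euclidean distance) to the observed one."""
    best_normal, best_dist = None, math.inf
    for normal, stored in reference_db:
        d = math.dist(observation, stored)  # Euclidean distance, Python 3.8+
        if d < best_dist:
            best_normal, best_dist = normal, d
    return best_normal

# A pixel whose observation matches the second database entry:
print(lookup_normal([0.5, 0.6, 0.4, 0.5] * 4))  # → (0.6, 0.0, 0.8)
```

Because the lookup is done independently per pixel, no smoothness constraint is involved, which is why the estimated normals are not oversmoothed.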
Figure 3. Observation vector.
Figure 4. Our approach.
Figure 5. Example of camera spectral sensitivity: (a) a sensor with channel crosstalk; (b) a sensor without channel crosstalk.
Figure 6. Experimental apparatus.
Figure 7. Spectral response of the camera.
Figure 8. Reference objects.
Figure 9. Performance evaluation result: (a) target spherical object painted in two colors; (b) error map of the conventional photometric stereo; (c) error map of the existing method; (d) error map of the proposed method.
Figure 10. Comparison: (a) target object; (b) estimated surface normal of the previous method; (c) estimated surface normal of the proposed method.
Figure 11. Experimental results of the (1) multi-colored object, (2) white object, (3) single-colored object, (4) dark object, and (5,6) deformable object: (a) target object; (b) estimated surface normal; (c,d) reconstructed shape.
Figure 12. Parameter tuning problem of the previous method: (a) sharp normal and sharp albedo; (b) smooth normal and sharp albedo; (c) sharp normal and smooth albedo; (d) smooth normal and smooth albedo.
Table 1. Spectral response of each camera channel: peak wavelength and the band(s) where the response exceeds 50% of the peak.

Channel 1: peak 488 nm; 50% band 488–492 nm
Channel 2: peak 499 nm; 50% band 495–503 nm
Channel 3: peak 479 nm; 50% band 467–486 nm
Channel 4: peak 469 nm; 50% band 464–474 nm
Channel 5: peak 599 nm; 50% bands 459–465 nm, 595–602 nm
Channel 6: peak 609 nm; 50% bands 464–470 nm, 606–615 nm
Channel 7: peak 587 nm; 50% band 583–591 nm
Channel 8: peak 575 nm; 50% band 570–578 nm
Channel 9: peak 641 nm; 50% bands 483–488 nm, 635–646 nm
Channel 10: peak 644 nm; 50% bands 489–497 nm, 637–646 nm
Channel 11: peak 631 nm; 50% band 626–638 nm
Channel 12: peak 622 nm; 50% bands 468–473 nm, 616–627 nm
Channel 13: peak 539 nm; 50% band 535–543 nm
Channel 14: peak 552 nm; 50% band 547–555 nm
Channel 15: peak 525 nm; 50% band 521–532 nm
Channel 16: peak 513 nm; 50% band 509–519 nm
Table 2. Peak wavelength of each light (10 nm width).

Light 1: 488 nm; Light 2: 632 nm; Light 3: 540 nm; Light 4: 500 nm
Light 5: 647 nm; Light 6: 600 nm; Light 7: 470 nm; Light 8: 610 nm
Light 9: 520 nm; Light 10: 568 nm; Light 11: 620 nm; Light 12: 473 nm
Light 13: 636 nm; Light 14: 515 nm; Light 15: 589 nm; Light 16: 550 nm
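The correspondence between the lights of Table 2 and the camera channels of Table 1 can be illustrated with a short sketch. The peak wavelengths below are copied from the two tables; the nearest-peak matching function itself is a hypothetical illustration of why each light is observed mainly in one channel, not the calibration procedure used in the paper.

```python
# Peak wavelengths (nm) from Table 1 (camera channels) and Table 2 (lights).
channel_peaks = {1: 488, 2: 499, 3: 479, 4: 469, 5: 599, 6: 609, 7: 587,
                 8: 575, 9: 641, 10: 644, 11: 631, 12: 622, 13: 539,
                 14: 552, 15: 525, 16: 513}
light_peaks = {1: 488, 2: 632, 3: 540, 4: 500, 5: 647, 6: 600, 7: 470,
               8: 610, 9: 520, 10: 568, 11: 620, 12: 473, 13: 636,
               14: 515, 15: 589, 16: 550}

def closest_channel(wavelength_nm):
    """Return the camera channel whose peak sensitivity lies nearest the
    given light's peak wavelength (ties broken by lower channel number)."""
    return min(channel_peaks,
               key=lambda c: (abs(channel_peaks[c] - wavelength_nm), c))

# e.g., Light 1 (488 nm) is best captured by Channel 1 (peak 488 nm):
print(closest_channel(light_peaks[1]))  # → 1
```

In practice the 50%-of-peak bands in Table 1 show that some channels respond in two wavelength regions (crosstalk, cf. Figure 5), which is why a simple one-to-one pairing like this can only be an approximation.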
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Miyazaki, D.; Uegomori, K. Example-Based Multispectral Photometric Stereo for Multi-Colored Surfaces. J. Imaging 2022, 8, 107. https://doi.org/10.3390/jimaging8040107
