Improving CNN-Based Texture Classification by Color Balancing
Abstract
1. Introduction
2. Related Works
2.1. Color Texture Classification under Varying Illumination Conditions
2.2. Color Balancing
3. Materials and Methods
3.1. RawFooT
- 4 acquisitions with a D65 illuminant of varying intensity (100%, 75%, 50%, 25% of the maximum);
- 9 acquisitions in which only a portion of one of the monitors was lit to obtain directional light (approximately 24, 30, 36, 42, 48, 54, 60, 66 and 90 degrees between the direction of the incoming light and the normal to the texture sample);
- 12 acquisitions with both monitors entirely projecting simulated daylight (D40, …, D95);
- 6 acquisitions with the monitor simulating artificial light (L27, ..., L65);
- 9 acquisitions with simultaneous changes of both the direction and the color of the light;
- 3 acquisitions with the two monitors simulating different illuminants (L27+D65, L27+D95 and D65+D95);
- 3 acquisitions with both monitors projecting pure red, green and blue light.
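Summing the acquisition counts above gives the number of lighting conditions under which each texture sample is captured; a minimal check of the arithmetic (illustrative only):

```python
# Acquisition counts listed above (RawFooT lighting conditions).
conditions = {
    "D65 intensity variations": 4,
    "directional light": 9,
    "daylight temperatures": 12,
    "LED temperatures": 6,
    "direction + color changes": 9,
    "mixed illuminants": 3,
    "pure R/G/B light": 3,
}

total = sum(conditions.values())
print(total)  # 46 lighting conditions per texture sample
```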
3.2. Color Balancing
- device-raw: it does not apply any correction to the device-dependent raw values, which are left as recorded by the camera sensor;
- light-raw: it compensates only for the color of the illuminant, without compensating for its intensity and without applying any color mapping, so the values remain device-dependent;
- dcraw-srgb: it performs a full color characterization according to the standard color correction pipeline. The chosen characterization illuminant is the D65 standard illuminant, while the color mapping is linear and fixed for all illuminants that may occur. The correction is performed using the DCRaw software (available at http://www.cybercom.net/~dcoffin/dcraw/);
- linear-srgb: it performs a full color characterization according to the standard color correction pipeline, but using a different illuminant color compensation and a different linear color mapping for each illuminant;
- rooted-srgb: it performs a full color characterization according to the standard color correction pipeline, but using a different illuminant color compensation and a different color mapping for each illuminant. The color mapping is no longer linear: the device-dependent colors are polynomially expanded with a rooted second-degree polynomial before the mapping is applied (see the sketch after this list).
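To make the difference between the linear and the rooted mappings concrete, the following sketch applies a von Kries-style illuminant compensation followed by either a 3x3 linear mapping or a rooted second-degree polynomial mapping in the style of Finlayson et al. The gains and matrices below are placeholders for illustration, not the values estimated in the paper:

```python
import numpy as np

def white_balance(rgb, gains):
    """Von Kries-style diagonal illuminant compensation (per-channel gains)."""
    return rgb * gains  # rgb: (..., 3), gains: (3,)

def linear_map(rgb, M):
    """Linear color mapping with a 3x3 matrix M (device RGB -> sRGB-like)."""
    return rgb @ M.T

def rooted_poly_expand(rgb):
    """Rooted second-degree expansion: [R, G, B, sqrt(RG), sqrt(GB), sqrt(RB)]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(g * b), np.sqrt(r * b)], axis=-1)

def rooted_map(rgb, M6):
    """Color mapping with a 3x6 matrix applied to the rooted expansion."""
    return rooted_poly_expand(rgb) @ M6.T

# Placeholder values, for illustration only.
gains = np.array([2.0, 1.0, 1.5])                   # illuminant color compensation
M_lin = np.eye(3)                                    # 3x3 linear mapping
M_root = np.hstack([np.eye(3), np.zeros((3, 3))])    # 3x6 rooted mapping

raw = np.array([[0.20, 0.35, 0.10]])                 # a device-raw pixel
balanced = white_balance(raw, gains)
print(linear_map(balanced, M_lin))                   # linear-srgb-style output
print(rooted_map(balanced, M_root))                  # rooted-srgb-style output
```

A useful property of the rooted expansion is that scaling the input RGB by a factor k scales every expanded term by k as well, so the mapping behaves consistently under exposure changes.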
4. Experimental Setup
4.1. RawFooT Database Setup
- Daylight temperature: 132 subsets obtained by combining all 12 daylight temperature variations. Each subset is composed of training and test patches taken under different light temperatures.
- LED temperature: 30 subsets obtained by combining all six LED temperature variations. Each subset is composed of training and test patches taken under different light temperatures.
- Daylight vs. LED: 72 subsets obtained by combining the 12 daylight temperatures with the six LED temperatures; the sketch after this list verifies these counts.
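The subset counts above follow from simple combinatorics over the available lighting conditions; a short sketch verifying them (the specific temperature labels are illustrative assumptions, not the exact RawFooT labels):

```python
from itertools import permutations, product

daylight = [f"D{t}" for t in range(40, 100, 5)]     # 12 daylight temperatures (D40..D95)
led = ["L27", "L30", "L40", "L50", "L57", "L65"]     # 6 LED temperatures (illustrative labels)

# Ordered (training, test) pairs with different light temperatures.
daylight_subsets = list(permutations(daylight, 2))   # 12 * 11 = 132
led_subsets = list(permutations(led, 2))             # 6 * 5  = 30
daylight_vs_led = list(product(daylight, led))       # 12 * 6 = 72

print(len(daylight_subsets), len(led_subsets), len(daylight_vs_led))  # 132 30 72
```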
4.2. Visual Descriptors
- BVLC AlexNet (BVLC AlexNet): this is the AlexNet trained on ILSVRC 2012 [1].
- Medium CNN (Vgg M-2048-1024-128): these are three modifications of the Vgg M network with a lower-dimensional last fully-connected layer, producing feature vectors of size 2048, 1024 and 128, respectively [51].
- Vgg Very Deep 16 and 19 layers (Vgg VeryDeep 16 and 19): the configuration of these networks is obtained by increasing the depth to 16 and 19 layers, which results in substantially deeper networks than the previous ones [2].
- ResNet 50 (ResNet-50): this is a residual network with 50 layers. Residual learning frameworks are designed to ease the training of networks substantially deeper than those used previously [52].
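These networks are used as off-the-shelf feature extractors. A minimal sketch of this use with PyTorch/torchvision (an assumption for illustration; the paper's experiments rely on MatConvNet), taking the output of the last layer before the classifier as the descriptor and L2-normalizing it (a common choice, assumed here; the file name is hypothetical):

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pre-trained network and drop the final classification layer,
# so the forward pass yields a feature vector rather than class scores.
net = models.resnet50(pretrained=True)   # newer torchvision prefers the weights= argument
net.fc = torch.nn.Identity()             # 2048-d descriptor from the global-pooled output
net.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("texture_patch.png").convert("RGB")  # hypothetical texture patch
with torch.no_grad():
    feat = net(preprocess(img).unsqueeze(0)).squeeze(0)

feat = feat / feat.norm()   # L2-normalize the descriptor before classification
print(feat.shape)           # torch.Size([2048])
```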
4.3. Texture Classification
5. Results and Discussion
6. Conclusions
Acknowledgments
Author Contributions
Conflicts of Interest
References
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2012; pp. 1097–1105. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 2014, 1–14. [Google Scholar]
- Zhou, B.; Lapedriza, A.; Xiao, J.; Torralba, A.; Oliva, A. Learning Deep Features for Scene Recognition using Places Database. In Advances in Neural Information Processing Systems 27; Neural Information Processing Systems (NIPS): Montreal, QC, Canada, 2014; pp. 487–495. [Google Scholar]
- Chen, Y.H.; Chao, T.H.; Bai, S.Y.; Lin, Y.L.; Chen, W.C.; Hsu, W.H. Filter-invariant image classification on social media photos. In Proceedings of the 23rd ACM International Conference on Multimedia, Brisbane, Australia, 26–30 October 2015. [Google Scholar]
- Gijsenij, A.; Gevers, T.; Van De Weijer, J. Computational color constancy: Survey and experiments. IEEE Trans. Image Process. 2011, 20, 2475–2489. [Google Scholar] [PubMed]
- Barnard, K.; Funt, B. Camera characterization for color research. Color Res. Appl. 2002, 27, 152–163. [Google Scholar] [CrossRef]
- Bianco, S.; Schettini, R. Error-tolerant Color Rendering for Digital Cameras. J. Math. Imaging Vis. 2014, 50, 235–245. [Google Scholar] [CrossRef]
- Bianco, S.; Schettini, R.; Vanneschi, L. Empirical modeling for colorimetric characterization of digital cameras. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009. [Google Scholar]
- Cusano, C.; Napoletano, P.; Schettini, R. Evaluating color texture descriptors under large variations of controlled lighting conditions. J. Opt. Soc. Am. A 2016, 33, 17–30. [Google Scholar]
- Razavian, A.S.; Azizpour, H.; Sullivan, J.; Carlsson, S. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Columbus, OH, USA, 23–28 June 2014. [Google Scholar]
- Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef]
- Bianconi, F.; Harvey, R.; Southam, P.; Fernández, A. Theoretical and experimental comparison of different approaches for color texture classification. J. Electron. Imaging 2011, 20, 043006. [Google Scholar] [CrossRef]
- Palm, C. Color texture classification by integrative Co-occurrence matrices. Pattern Recognit. 2004, 37, 965–976. [Google Scholar] [CrossRef]
- Mäenpää, T.; Pietikäinen, M. Classification with color and texture: Jointly or separately? Pattern Recognit. 2004, 37, 1629–1640. [Google Scholar] [CrossRef]
- Seifi, M.; Song, X.; Muselet, D.; Tremeau, A. Color texture classification across illumination changes. In Proceedings of the Conference on Colour in Graphics, Imaging, and Vision, Joensuu, Finland, 14–17 June 2010. [Google Scholar]
- Cusano, C.; Napoletano, P.; Schettini, R. Combining local binary patterns and local color contrast for texture classification under varying illumination. J. Opt. Soc. Am. A 2014, 31, 1453–1461. [Google Scholar] [CrossRef] [PubMed]
- Cusano, C.; Napoletano, P.; Schettini, R. Local Angular Patterns for Color Texture Classification. In New Trends in Image Analysis and Processing – ICIAP 2015 Workshops; Murino, V., Puppo, E., Sona, D., Cristani, M., Sansone, C., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 111–118. [Google Scholar]
- Drimbarean, A.; Whelan, P. Experiments in colour texture analysis. Pattern Recognit. Lett. 2001, 22, 1161–1167. [Google Scholar] [CrossRef]
- Bianconi, F.; Fernández, A.; González, E.; Armesto, J. Robust color texture features based on ranklets and discrete Fourier transform. J. Electron. Imaging 2009, 18, 043012. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Cimpoi, M.; Maji, S.; Vedaldi, A. Deep filter banks for texture recognition and segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Cusano, C.; Napoletano, P.; Schettini, R. Combining multiple features for color texture classification. J. Electron. Imaging 2016, 25, 061410. [Google Scholar] [CrossRef]
- Buchsbaum, G. A spatial processor model for object colour perception. J. Frankl. Inst. 1980, 310, 1–26. [Google Scholar] [CrossRef]
- Cardei, V.C.; Funt, B.; Barnard, K. White point estimation for uncalibrated images. In Proceedings of the Seventh Color Imaging Conference: Color Science, Systems, and Applications Putting It All Together, CIC 1999, Scottsdale, AZ, USA, 16–19 November 1999. [Google Scholar]
- Van de Weijer, J.; Gevers, T.; Gijsenij, A. Edge-based color constancy. IEEE Trans. Image Process. 2007, 16, 2207–2214. [Google Scholar] [CrossRef] [PubMed]
- Forsyth, D.A. A novel algorithm for color constancy. Int. J. Comput. Vis. 1990, 5, 5–35. [Google Scholar] [CrossRef]
- Gehler, P.V.; Rother, C.; Blake, A.; Minka, T.; Sharp, T. Bayesian color constancy revisited. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
- Bianco, S.; Ciocca, G.; Cusano, C.; Schettini, R. Improving Color Constancy Using Indoor-Outdoor Image Classification. IEEE Trans. Image Process. 2008, 17, 2381–2392. [Google Scholar] [CrossRef] [PubMed]
- Bianco, S.; Cusano, C.; Schettini, R. Color Constancy Using CNNs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Boston, MA, USA, 7–12 June 2015. [Google Scholar]
- Bianco, S.; Cusano, C.; Schettini, R. Single and Multiple Illuminant Estimation Using Convolutional Neural Networks. IEEE Trans. Image Process. 2017. [Google Scholar] [CrossRef] [PubMed]
- McCamy, C.S.; Marcus, H.; Davidson, J. A color-rendition chart. J. App. Photog. Eng. 1976, 2, 95–99. [Google Scholar]
- ISO. ISO/17321-1:2006: Graphic Technology and Photography – Colour Characterisation of Digital Still Cameras (DSCs) – Part 1: Stimuli, Metrology and Test Procedures; ISO: Geneva, Switzerland, 2006. [Google Scholar]
- Finlayson, G.D.; Drew, M.S. Constrained least-squares regression in color spaces. J. Electron. Imaging 1997, 6, 484–493. [Google Scholar] [CrossRef]
- Vrhel, M.J.; Trussell, H.J. Optimal scanning filters using spectral reflectance information. In IS&T/SPIE’s Symposium on Electronic Imaging: Science and Technology; International Society for Optics and Photonics: San Jose, CA, USA, 1993; pp. 404–412. [Google Scholar]
- Bianco, S.; Bruna, A.R.; Naccari, F.; Schettini, R. Color correction pipeline optimization for digital cameras. J. Electron. Imaging 2013, 22, 023014. [Google Scholar] [CrossRef]
- Bianco, S.; Bruna, A.; Naccari, F.; Schettini, R. Color space transformations for digital photography exploiting information about the illuminant estimation process. J. Opt. Soc. Am. A 2012, 29, 374–384. [Google Scholar] [CrossRef] [PubMed]
- Bianco, S.; Gasparini, F.; Schettini, R.; Vanneschi, L. Polynomial modeling and optimization for colorimetric characterization of scanners. J. Electron. Imaging 2008, 17, 043002. [Google Scholar] [CrossRef]
- Finlayson, G.D.; Mackiewicz, M.; Hurlbert, A. Root-polynomial colour correction. In Proceedings of the 19th Color and Imaging Conference, CIC 2011, San Jose, CA, USA, 7–11 November 2011. [Google Scholar]
- Kang, H.R. Computational Color Technology; Spie Press: Bellingham, WA, USA, 2006. [Google Scholar]
- Schettini, R.; Barolo, B.; Boldrin, E. Colorimetric calibration of color scanners by back-propagation. Pattern Recognit. Lett. 1995, 16, 1051–1056. [Google Scholar] [CrossRef]
- Kang, H.R.; Anderson, P.G. Neural network applications to the color scanner and printer calibrations. J. Electron. Imaging 1992, 1, 125–135. [Google Scholar]
- Bianconi, F.; Fernández, A. An appendix to “Texture databases—A comprehensive survey”. Pattern Recognit. Lett. 2014, 45, 33–38. [Google Scholar] [CrossRef]
- Hossain, S.; Serikawa, S. Texture databases—A comprehensive survey. Pattern Recognit. Lett. 2013, 34, 2007–2022. [Google Scholar] [CrossRef]
- Wyszecki, G.; Stiles, W.S. Color Science; Wiley: New York, NY, USA, 1982. [Google Scholar]
- Anderson, M.; Motta, R.; Chandrasekar, S.; Stokes, M. Proposal for a standard default color space for the internet—sRGB. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 19–22 November 1996. [Google Scholar]
- Ramanath, R.; Snyder, W.E.; Yoo, Y.; Drew, M.S. Color image processing pipeline. IEEE Signal Process. Mag. 2005, 22, 34–43. [Google Scholar] [CrossRef]
- Von Kries, J. Chromatic adaptation. In Festschrift der Albrecht-Ludwigs-Universität; Universität Freiburg im Breisgau: Freiburg im Breisgau, Germany, 1902; pp. 145–158. [Google Scholar]
- Bianco, S.; Schettini, R. Computational color constancy. In Proceedings of the 2011 3rd European Workshop on Visual Information Processing (EUVIP), Paris, France, 4–6 July 2011. [Google Scholar]
- Nayatani, Y.; Takahama, K.; Sobagaki, H.; Hashimoto, K. Color-appearance model and chromatic-adaptation transform. Color Res. Appl. 1990, 15, 210–221. [Google Scholar] [CrossRef]
- Bianco, S.; Schettini, R. Two new von Kries based chromatic adaptation transforms found by numerical optimization. Color Res. Appl. 2010, 35, 184–192. [Google Scholar] [CrossRef]
- Chatfield, K.; Simonyan, K.; Vedaldi, A.; Zisserman, A. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531 2014, 1–11. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Vedaldi, A.; Lenc, K. MatConvNet—Convolutional Neural Networks for MATLAB. CoRR 2014. [Google Scholar] [CrossRef]
- Napoletano, P. Hand-Crafted vs Learned Descriptors for Color Texture Classification. In International Workshop on Computational Color Imaging; Springer: Berlin, Germany, 2017; pp. 259–271. [Google Scholar]
- Zeiler, M.D.; Fergus, R. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014; Springer: Berlin, Germany, 2014; pp. 818–833. [Google Scholar]
- Sermanet, P.; Eigen, D.; Zhang, X.; Mathieu, M.; Fergus, R.; LeCun, Y. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229 2013, 1–16. [Google Scholar]
Color-balancing steps and mapping properties of the considered models (first three symbol columns: color-balancing steps; last two columns: mapping properties):

Model Name | Illum. Intensity Compensation | Illum. Color Compensation | Color Mapping | Mapping Type | Number of Mappings |
---|---|---|---|---|---|
Device-raw (Equation (7)) | ○ | ○ | ○ | – | – |
Light-raw (Equation (8)) | ○ | ● | ○ | – | – |
Dcraw-srgb (Equation (6)) | ● | ◐ fixed for D65 | ● | Linear | 1 |
Linear-srgb (Equation (9)) | ● | ● | ● | Linear | 1 for each illum. |
Rooted-srgb (Equation (10)) | ● | ● | ● | Rooted 2nd-deg. poly. | 1 for each illum. |
Classification accuracy (%) of each visual descriptor under the five color-balancing models:

Features | Device-Raw | Light-Raw | Dcraw-Srgb | Linear-Srgb | Rooted-Srgb |
---|---|---|---|---|---|
VGG-F | 87.81 | 90.09 | 93.23 | 96.25 | 95.83 |
VGG-M | 91.26 | 92.69 | 94.71 | 95.85 | 96.14 |
VGG-S | 90.36 | 92.64 | 93.54 | 96.83 | 96.65 |
VGG-M-2048 | 89.83 | 92.09 | 94.08 | 95.37 | 96.15 |
VGG-M-1024 | 88.34 | 90.92 | 93.74 | 94.31 | 94.92 |
VGG-M-128 | 82.52 | 85.99 | 87.35 | 90.17 | 90.97 |
AlexNet | 84.65 | 87.16 | 93.34 | 93.58 | 93.68 |
VGG-VD-16 | 91.15 | 94.68 | 95.79 | 98.23 | 97.93 |
VGG-VD-19 | 92.22 | 94.87 | 95.38 | 97.71 | 97.51 |
ResNet-50 | 97.42 | 98.92 | 98.67 | 99.28 | 99.52 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).