
Assessment of Camouflage Effectiveness Based on Perceived Color Difference and Gradient Magnitude

National Laboratory of Colour Science and Engineering, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(17), 4672; https://doi.org/10.3390/s20174672
Submission received: 22 June 2020 / Revised: 5 August 2020 / Accepted: 16 August 2020 / Published: 19 August 2020
(This article belongs to the Section Intelligent Sensors)

Abstract

We propose a new model to assess the effectiveness of camouflage in terms of perceived color difference and gradient magnitude. The “image color similarity index” (ICSI) and gradient magnitude similarity deviation (GMSD) were employed to analyze color and texture differences, respectively, between background and camouflage images. Information entropy theory was used to calculate weights for each metric, yielding an overall camouflage effectiveness metric. During the analysis process, both spatial and color perceptions of the human visual system (HVS) were considered, to mimic real-world observations. Subjective tests were used to compare our proposed method with previous methods, and our results confirmed the validity of assessing camouflage effectiveness based on perceived color difference and gradient magnitude.

1. Introduction

Camouflage, which serves to blend objects into the background by using similar colors and patterns, has many applications in the fields of bionics and robotics, and for military purposes. Over the past few decades, elements of computer vision, statistical analysis, image processing, nanomaterials, human visual perception, and ergonomics have been introduced to camouflage research [1,2,3,4,5]. A good evaluation method to test the effectiveness of camouflage is very important—one which can provide an effective theoretical basis for camouflage research, predict the performance of camouflage in advance, and help to subsequently optimize the design of camouflage patterns.
Vision-based object detection techniques have been used to discriminate between objects and backgrounds, and thus can also be used to evaluate the effectiveness of camouflage [6,7,8]. These methods, which include the scale-invariant feature transform (SIFT) [9], histogram of oriented gradients (HOG) [10], and local binary patterns (LBP) [11], have applications in a variety of fields, such as facial recognition, depth calculation, and three-dimensional reconstruction. When applying these detection methods, it is necessary to consider the characteristics of the human visual system (HVS), to ensure that camouflage is evaluated as humans would perceive it. Traditional image quality assessment algorithms have been combined with analyses of the HVS, so that the universal image quality index (UIQI) [12] and structural similarity index (SSIM) [13] can yield results that accord with actual human visual perception. The UIQI and SSIM have been applied to evaluations of camouflage effectiveness based on the similarity between the camouflage and the background. Additional metrics, such as gradient orientation, luminance, and phase congruency, have also been used [14]. However, the computational costs of such metrics can be very high, and they yield only small performance improvements. Gradient magnitude similarity deviation (GMSD) [14] lowers the computational cost while still modeling the HVS accurately: it effectively captures diverse local structures and describes texture patterns in detail. This method has already been applied in image processing and computer vision; however, to the best of our knowledge, it has not been applied to the study of camouflage. In background-matching tasks, color similarity is commonly analyzed alongside pattern; indeed, color matching is critical for the survival of many animals in the wild.
The methods described above are all based on the differences between the textures of the object and background, and use gray or red, green, and blue (RGB) images. However, RGB color models do not represent the colors actually observed by humans, unless a color appearance model and color management system are used [15].
CIE color space was introduced as a standard for application to displays, art design, color printing, etc. CIELUV and CIELAB, based on CIE color space, aim to achieve a uniform color space with consideration of differences in the perception of colors by the human eye. Perceptual differences between colors are indicated by the distance between points corresponding to individual colors in the color space; these differences can be used as references when assessing the similarity between an object and its background. It has been shown that color perception, in terms of lightness, chroma, and hue, is influenced by surrounding textures and colors. CIE recommends using CIELAB space to characterize colored surfaces and dyes. As the demand for accurate quantification of color differences has increased, more CIELAB color difference formulas have been developed, such as CMC (l:c) [16], BFD(l:c) [17], and CIEDE2000 [18]. CMC (l:c) and CIEDE2000 have been compared, and the camouflage similarity index (CSI) can be used to measure pixel-to-pixel color differences between a camouflaged object and its background using the CIELAB color system [19]. CSI performs well when used to discriminate an object from a large uniform background [20]. To process complex images, Spatial-CIELAB (S-CIELAB) [21] was developed from CIELAB space and takes into account sensitivity to color patterns; it may be superior for analyzing the similarity between a target object and background in assessments of camouflage effectiveness. Color and texture are both important in camouflage design. Therefore, a model that comprehensively evaluates both parameters will provide better results than one that only considers a single parameter.
In this paper, we present a new method for evaluating the effectiveness of camouflage, based on the analysis of color and textural differences between camouflaged objects and their backgrounds. For the color difference analysis, camouflaged objects and backgrounds are transformed into S-CIELAB space. Both spatial and color perceptions of the HVS are considered as the method is based on the way in which the human eye observes the real world. GMSD is applied to assess texture, as it has the advantage of a simple calculation process and yields results that can be related to actual human perception. We designed an experimental procedure using subjective tests to compare performance between the new method and previous ones.

2. Methods

2.1. Overview

Figure 1 shows our proposed camouflage effectiveness assessment method. Several typical background scenes (N) and camouflage images (M) are input into the model. Both color and texture are analyzed using the “image color similarity index” (ICSI) and GMSD. To calculate the ICSI, all the background and camouflage images are transformed from RGB to S-CIELAB space, and the color difference between each camouflage and background image is obtained based on the standard deviation of the distance in S-CIELAB space. Regarding GMSD, the gradient magnitudes of pairs of images are calculated, and the texture difference is expressed as the standard deviation of the difference in magnitude between the gradients of the camouflage and background images. During the calculation of the ICSI and GMSD, the resolutions of the camouflage and background images are equalized to facilitate comparison. As the ICSI and GMSD are calculated for image pairs, M × N color and texture difference matrices are obtained separately. The weights of each metric are determined using the information entropy method, and the final assessment of the effectiveness of each camouflage pattern is given by the weighted mean of the difference between the camouflage image and all background images. The algorithms of the ICSI, GMSD, and the information entropy method are described in detail in Section 2.2, Section 2.3 and Section 2.4, respectively.
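As a structural sketch, the M × N difference matrices of Figure 1 might be assembled as follows (Python is used for illustration; `color_diff` and `texture_diff` are stand-ins for the ICSI and GMSD computations detailed in Section 2.2 and Section 2.3):

```python
import numpy as np

def build_difference_matrices(camouflages, backgrounds, color_diff, texture_diff):
    """Assemble the M x N x 2 array of pairwise difference scores.

    camouflages: list of M camouflage images.
    backgrounds: list of N background images (object-overlapping regions).
    color_diff, texture_diff: callables returning one scalar per image pair.
    """
    M, N = len(camouflages), len(backgrounds)
    x = np.empty((M, N, 2))
    for i, cam in enumerate(camouflages):
        for j, bg in enumerate(backgrounds):
            x[i, j, 0] = color_diff(cam, bg)    # ICSI slice (k = 1)
            x[i, j, 1] = texture_diff(cam, bg)  # GMSD slice (k = 2)
    return x
```

The resulting array feeds directly into the entropy weighting of Section 2.4.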

2.2. Image Color Similarity Index (ICSI)

As mentioned above, S-CIELAB space considers both the spatial and color perceptions of the HVS, which enables a more accurate color similarity analysis than other color spaces [21]. Before image pairs are preprocessed using S-CIELAB, the RGB input images should be transformed into device-independent CIE XYZ tristimulus values [22]. Then, both the camouflage and background images should be converted into the opponent color space. Because the spatial response of the HVS can be treated as a low-pass filter from an image-processing perspective, low-pass filters for each opponent color channel have been studied in detail in previous works [23]. After spatial filtering, the filtered images are transformed back into CIELAB space, whose channels are L* (lightness), a* (the green–red axis), and b* (the blue–yellow axis).
The ICS, defined as the distance in S-CIELAB color space between the i-th camouflaged object and the object-overlapping region of the j-th background image, can be expressed for pixel (u, v) as follows:
$$ICS(u,v) = \sqrt{(L_i^* - L_j^*)^2 + (a_i^* - a_j^*)^2 + (b_i^* - b_j^*)^2} \tag{1}$$
The standard deviation of the color difference was calculated, as this provides a more comprehensive evaluation than the mean.
$$ICSI = \sqrt{\frac{1}{U \times V}\sum_{u=1}^{U}\sum_{v=1}^{V}\left(ICS(u,v) - \frac{1}{U \times V}\sum_{u=1}^{U}\sum_{v=1}^{V} ICS(u,v)\right)^2} \tag{2}$$
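Assuming the camouflage patch and the object-overlapping background region have already been converted to spatially filtered L*a*b* arrays of equal size, Equations (1) and (2) reduce to a few NumPy operations (a minimal sketch; the array names are illustrative):

```python
import numpy as np

def icsi(lab_cam, lab_bg):
    """Image color similarity index of an image pair.

    lab_cam, lab_bg: (U, V, 3) arrays of S-CIELAB L*, a*, b* values,
    assumed spatially filtered and equal in resolution.
    """
    # Equation (1): per-pixel Euclidean distance in L*a*b* space.
    ics = np.sqrt(((lab_cam - lab_bg) ** 2).sum(axis=-1))
    # Equation (2): standard deviation of the distance map.
    return float(np.sqrt(((ics - ics.mean()) ** 2).mean()))
```

Note that a uniform color offset between the two patches yields an ICSI of zero, reflecting that the index measures the spread of the color difference rather than its mean.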

2.3. Gradient Magnitude Similarity Deviation (GMSD)

GMSD can be used for texture analysis [14]; we used the Sobel operator for this purpose. The GMSD of the i-th camouflage and j-th background image can be obtained in the horizontal and vertical directions, as shown in Equations (3)–(5):
$$S_i(u,v) = \sqrt{(s_h \otimes I_i(u,v))^2 + (s_v \otimes I_i(u,v))^2} \tag{3}$$
$$S_j(u,v) = \sqrt{(s_h \otimes I_j(u,v))^2 + (s_v \otimes I_j(u,v))^2} \tag{4}$$
$$s_h = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix}, \quad s_v = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \tag{5}$$
where “⊗” denotes the convolution operation.
$$GMS(u,v) = \frac{2\,S_i(u,v)\,S_j(u,v)}{S_i^2(u,v) + S_j^2(u,v)} \tag{6}$$
The standard deviation of the GMS map was then computed as the overall GMSD, an index of the textural similarity between the camouflage and background images.
$$GMSD_{ij} = \sqrt{\frac{1}{UV}\sum_{u=1}^{U}\sum_{v=1}^{V}\left(GMS(u,v) - \frac{1}{UV}\sum_{u=1}^{U}\sum_{v=1}^{V} GMS(u,v)\right)^2} \tag{7}$$
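Equations (3)–(7) can be sketched in plain NumPy as follows. The small stabilizing constant `c` follows the original GMSD paper [14] and is an addition to the formulas above; the cross-correlation sign convention is irrelevant because the responses are squared.

```python
import numpy as np

SOBEL_H = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
SOBEL_V = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def _filter3(img, k):
    """Valid-mode 3x3 cross-correlation, implemented with slicing."""
    U, V = img.shape
    out = np.zeros((U - 2, V - 2))
    for du in range(3):
        for dv in range(3):
            out += k[du, dv] * img[du:du + U - 2, dv:dv + V - 2]
    return out

def gradient_magnitude(img):
    # Equations (3)/(4): magnitude of the horizontal and vertical Sobel responses.
    return np.sqrt(_filter3(img, SOBEL_H) ** 2 + _filter3(img, SOBEL_V) ** 2)

def gmsd(img_i, img_j, c=0.0026):
    """GMSD between two equal-sized grayscale images."""
    s_i, s_j = gradient_magnitude(img_i), gradient_magnitude(img_j)
    gms = (2 * s_i * s_j + c) / (s_i ** 2 + s_j ** 2 + c)  # Equation (6)
    return float(gms.std())                                # Equation (7)
```

For identical images the GMS map is 1 everywhere, so the GMSD is zero, i.e., perfect textural similarity.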

2.4. Calculation of Metrics Weights

Information entropy theory can be applied to quantify the amount of useful information contained within data. When the values of an indicator vary widely, its entropy is small, so higher weights are assigned to parameters that provide more information. As mentioned above, M × N color and texture difference matrices can be obtained using the ICSI and GMSD methods, respectively.
Using the ICSI and GMSD methods, the difference between the i-th camouflage and j-th background image is given by xijk, where k = 1 and 2 for the ICSI and GMSD, respectively. Thus, the proportion of the i-th camouflage pattern and j-th background pattern calculated using the k-th method can be expressed as follows:
$$p_{ijk} = x_{ijk} \Big/ \sum_{i=1}^{M}\sum_{j=1}^{N} x_{ijk} \tag{8}$$
According to entropy theory [24], the weight of each metric can be calculated using Equation (9), where the information entropy term is $-p_{ijk}\ln(p_{ijk})$.
$$\omega_k = \frac{1 + \frac{1}{\ln(M \times N)}\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ijk}\ln(p_{ijk})}{\sum_{k=1}^{2}\left(1 + \frac{1}{\ln(M \times N)}\sum_{i=1}^{M}\sum_{j=1}^{N} p_{ijk}\ln(p_{ijk})\right)} \tag{9}$$
Using the proposed model, the final camouflage effectiveness value is given by calculating the mean of the M values for color and texture using Equation (10).
$$v_j = \sum_{k=1}^{2}\sum_{i=1}^{M} \omega_k x_{ijk} \tag{10}$$
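Given the M × N × 2 difference array, Equations (8)–(10) might be implemented as below (a sketch that assumes all difference scores are strictly positive so that the logarithm is defined):

```python
import numpy as np

def entropy_weights(x):
    """Entropy-based metric weights (Equations (8) and (9)).

    x: (M, N, K) array of difference scores, K = 2 (ICSI, GMSD),
    assumed strictly positive.
    """
    M, N, _ = x.shape
    p = x / x.sum(axis=(0, 1), keepdims=True)        # Equation (8)
    # 1 + (1/ln(MN)) * sum_ij p ln p, one value per metric k.
    d = 1 + (p * np.log(p)).sum(axis=(0, 1)) / np.log(M * N)
    return d / d.sum()                               # Equation (9)

def effectiveness(x, w):
    """Weighted effectiveness score per background image (Equation (10))."""
    return (x * w).sum(axis=(0, 2))
```

Because the per-metric entropy is at most ln(M × N), each unnormalized weight lies in [0, 1], and the normalization makes the two weights sum to one.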

3. Experimental Setup

To validate our proposed method and compare it to previous ones, we carried out an experiment using an ocean background and several common camouflage patterns. An effectiveness metric for each camouflage pattern was calculated using CSI, UIQI, and our method. To compare the performance of the methods, subjective tests were also carried out comparing the results to HVS assessments.
Figure 2 shows a schematic diagram of the experimental procedure used to validate the developed method. Ten subjects with normal color vision (according to the Ishihara 24-plate test), aged from 22 to 35 years, participated in the experiment. During the experimental process, the participants sat in front of a liquid crystal display (LCD); the set-up was adjusted to ensure a horizontal viewing angle. The distance between the user’s eyes and the screen was set to 55 cm. The background and camouflage images were displayed on the screen. The resolution of the images was 1920 × 1080 (the same as the screen resolution), and clipped camouflage images with a resolution of 300 × 100 were placed within the background image, as shown in Figure 2a. The background images were three images of the sea (under calm, “sparkling”, and rough conditions). Six common camouflage patterns were used, including US woodland, MultiCam, and MARPAT, which have also been used in previous studies. The background and camouflage images that we used are shown in the appendix in detail. The background images of the sea were used due to their complex patterns, although other camouflage patterns and background scenes could also be used in conjunction with our method.
During the evaluation process, a clipped camouflage image was placed randomly within a background scene. Each participant was required to locate the camouflage pattern. The “hit rate” and detection time were the performance indicators; task difficulty ratings were also obtained. The hit rate was defined as the percentage of trials on which the camouflaged target was correctly identified (indicated by clicking on it with the computer mouse), and the detection time was the time between the participant clicking the start button to begin the trial and subsequently clicking on the target (or quitting the trial). The task difficulty was rated on a seven-point rating scale at the end of each trial, ranging from 1 (easiest) to 7 (most difficult) [20,25]. Before the experiment, the experimental procedure was explained to all participants. The order of the presentation of stimuli was randomized to avoid order effects. When the participant hit the target or pressed the space bar to quit the trial, the task difficulty rating question appeared on the screen. The experiment had no time limit. There was a 5 min rest period in the middle of the trial to reduce the possibility of visual fatigue.

4. Results and Discussion

The Pearson correlation coefficient (PCC) was used to assess the correlations between the results of the objective methods (ICSI, GMSD, and the proposed model), and those of the CSI, UIQI, and the subjective HVS evaluation. Table 1 shows the PCCs of the different methods for hit rate, detection time, and task difficulty. To facilitate the data analysis, we used the absolute PCC values (ranging from 0 to 1), where higher values indicate stronger correlations and more accurate evaluation of camouflage effectiveness.
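As a point of reference, the absolute PCC between a method's scores and one subjective measure can be computed directly with NumPy (a sketch; the variable names are illustrative):

```python
import numpy as np

def pearson_abs(method_scores, subjective):
    """Absolute Pearson correlation coefficient between an objective
    method's camouflage scores and one subjective performance measure
    (hit rate, detection time, or task difficulty)."""
    return float(abs(np.corrcoef(method_scores, subjective)[0, 1]))
```

Taking the absolute value maps both strong positive and strong negative correlations to values near 1, so the sign convention of each metric does not affect the comparison.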
The PCCs for all three performance parameters (hit rate, detection time, and task difficulty) were higher for our proposed method than for the other methods. Hence, our method performed best in terms of evaluating camouflage effectiveness. This is confirmed by Figure 3, Figure 4 and Figure 5, wherein “x” represents the experimental data for hit rate, detection time, and task difficulty, respectively. In the figures, the proposed method (ICSI + GMSD) shows more linear trends than the other methods. The experimental data for our method are close to the predicted data (blue lines in the figures), and mostly fall within the confidence intervals (the area between the upper and lower curves). Thus, the PCC results showed that the camouflage effectiveness assessment method proposed in this paper was the best-performing method, with results that were consistent with actual human visual perception.

5. Conclusions

In this paper, we presented a method for assessing camouflage effectiveness based on perceived color difference and gradient magnitude metrics. Color and texture differences were analyzed while taking both spatial and color perceptions of the HVS into consideration. Color and texture differences were discussed in detail, along with weighting calculations. The PCC data confirmed the superiority of our method over existing ones. In future research, we will apply the proposed method to real-world environments with natural lighting and extend it to encompass the infrared spectrum.

Author Contributions

X.B. conceived and designed the experimental setup and algorithms; X.B. and W.W. performed the experiments; X.B. and N.L. analyzed the experimental data; X.B. wrote the original draft; X.B. and N.L. reviewed and edited the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

This work was supported by the National Key Research and Development Program of China (No. 2017YFB1002504) and National Nature Science Foundation of China (No. 61975012).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

This appendix provides details of the experimental procedure and materials.

Appendix A.1. Background Collection

Three background images of the sea under different conditions were used in the experiment, as shown in Figure A1; the sea has a different color and wave texture in each photo. Figure A1a,b (backgrounds 1 and 2) were taken at the Yellow Sea, China, using a Canon EOS 400D Mark II. In addition, an X-Rite ColorChecker Color Rendition Chart was used to create an ICC profile for the experiment, ensuring that the images were processed in the same color space and displayed uniformly across all color devices. Figure A1c (background 3) was taken from the website of National Geographic [26]. The backgrounds were resized to 1920 × 1080 pixels, matching the resolution of the display used in the experiment.
Figure A1. Backgrounds under different conditions: (a) calm; (b) sparkling; (c) rough.

Appendix A.2. Camouflage Collection

Six commonly used camouflage patterns were obtained from a Web search. Because naval camouflage, like that applied to military weapons and equipment, seeks to maximize concealment, the six patterns were applied to a ship-shaped silhouette using Adobe Photoshop CS6, as shown and described in Table A1.
Table A1. Six camouflage types.
MTP: Multi-Terrain Pattern (MTP) is used by British forces. The main variant of MTP is a four-color woodland pattern for use on webbing in all terrains; the design was intended for use across the wide range of environments encountered.
Flecktarn: Flecktarn was designed in the mid-1970s by the West German Army. The leopard-like pattern took Europe by storm in the same way that Woodland did in North America.
MARPAT: MARPAT (Marine Pattern) was the United States Marine Corps' first digital camouflage and was implemented throughout the entire Marine forces in 2001. The color scheme updates the US Woodland pattern into a pixelated micropattern.
Woodland: US Woodland was the Battle Dress Uniform pattern for almost all of the American armed forces from 1981 to 2006 and is still in use by almost a quarter of the world's militaries. It is one of the most popular camouflages.
MultiCam: MultiCam was designed to blend into any type of terrain, weather, or lighting condition; it has been called the "all-season tire" of the camouflage world. Crye Precision developed it in 2003 for American troops in Afghanistan, and the design remains a favorite of technical outfitters.
Type 07: Type 07 is a group of military uniforms introduced in 2007 and used by all branches of the People's Liberation Army (PLA) of the People's Republic of China (PRC).

Appendix A.3. Apparatus

Experimental stimuli were shown on a 27-inch EIZO FlexScan EV2450 with 1920 × 1080 pixels. Figure A2 illustrates the set-up of the experiment. The participants sat in front of the monitor, and the chair was adjusted to ensure a horizontal viewing angle. The camouflaged ships inserted into the backgrounds were displayed on the monitor, with the distance between the user's eyes and the screen set to 55 cm, as shown in Figure A2. Each camouflaged ship appeared three times at random locations in each background, resulting in a total of 54 stimuli. The visual experiment was designed and carried out using Visual Studio.
Figure A2. The experimental set-up of the visual experiment.

References

1. Morin, S.A.; Shepherd, R.F.; Kwok, S.W.; Stokes, A.A.; Nemiroski, A.; Whitesides, G.M. Camouflage and display for soft machines. Science 2012, 337, 828–832.
2. Song, W.T.; Zhu, Q.D.; Huang, T.; Liu, Y.; Wang, Y.T. Volumetric display based on multiple mini-projectors and a rotating screen. Opt. Eng. 2015, 54, 013103.
3. Volonakis, T.N.; Matthews, O.E.; Liggins, E.; Baddeley, R.J.; Scott-Samuel, N.E.; Cuthill, I.C. Camouflage assessment: Machine and human. Comput. Ind. 2018, 99, 173–182.
4. Copeland, A.C.; Trivedi, M.M. Computational models for search and discrimination. Opt. Eng. 2001, 40, 1885–1896.
5. Troscianko, T.; Benton, C.P.; Lovell, P.G.; Tolhurst, D.J.; Pizlo, Z. Animal camouflage and visual perception. Philos. Trans. R. Soc. B 2009, 364, 449–461.
6. Nyberg, S.; Bohman, L. Assessing camouflage methods using textural features. Opt. Eng. 2001, 40, 60–71.
7. Song, J.; Liu, L.; Huang, W.; Li, Y.; Chen, X.; Zhang, Z. Target detection via HSV color model and edge gradient information in infrared and visible image sequences under complicated background. Opt. Quant. Electron. 2018, 50, 175.
8. González, A.; Vázquez, D.; López, A.M.; Amores, J. On-board object detection: Multicue, multimodal, and multiview random forest of local experts. IEEE Trans. Cybern. 2016, 47, 1–11.
9. Bosch, A.; Zisserman, A.; Muñoz, X. Scene classification using a hybrid generative/discriminative approach. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 712–727.
10. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630.
11. Ahonen, T.; Hadid, A.; Pietikäinen, M. Face recognition with local binary patterns. In Proceedings of the 8th European Conference on Computer Vision (ECCV), Prague, Czech Republic, 11–14 May 2004; Pajdla, T., Matas, J., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; pp. 469–481.
12. Lin, C.J.; Chang, C.C.; Liu, B.S. Developing and evaluating a target background similarity metric for camouflage detection. PLoS ONE 2014, 9, e87310.
13. Patil, K.V.; Pawar, K.N. Method for improving camouflage image quality using texture analysis. Int. J. Comput. Appl. 2017, 180, 6–8.
14. Xue, W.; Zhang, L.; Mou, X.; Bovik, A.C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Trans. Image Process. 2014, 23, 684–695.
15. Gentile, R.S. Device-independent color in PostScript. In Proceedings of Human Vision, Visual Processing, and Digital Display IV, IS&T/SPIE Symposium on Electronic Imaging: Science and Technology, San Jose, CA, USA, 31 January–5 February 1993; Proc. SPIE 1913; pp. 419–432.
16. McDonald, R. Acceptability and perceptibility decisions using the CMC color difference formula. Text. Chem. Color 1988, 20, 31–37.
17. Luo, M.R.; Rigg, B. BFD (l:c) colour-difference formula. Part 1—Development of the formula. J. Soc. Dyers Colour. 1987, 103, 86–94.
18. Luo, M.R.; Cui, G.H.; Rigg, B. The development of the CIE 2000 color difference formula: CIEDE2000. Color Res. Appl. 2001, 26, 340–350.
19. Lin, C.J.; Chang, C.C.; Lee, Y.H. Developing a similarity index for static camouflaged target detection. Imaging Sci. J. 2013, 62, 337–341.
20. Lin, C.J.; Prasetyo, Y.T.; Siswanto, N.D.; Jiang, B.C. Optimization of color design for military camouflage in CIELAB color space. Color Res. Appl. 2019, 44, 367–380.
21. Johnson, G.M.; Fairchild, M.D. A top down description of S-CIELAB and CIEDE2000. Color Res. Appl. 2003, 28, 425–435.
22. Kwak, Y.; MacDonald, L. Characterization of a desktop LCD projector. Displays 2000, 21, 179–194.
23. Poirson, A.B.; Wandell, B.A. Appearance of colored patterns: Pattern-color separability. J. Opt. Soc. Am. A 1993, 10, 2458–2470.
24. Zou, Z.-H.; Yun, Y.; Sun, J.-N. Entropy method for determination of weight of evaluating indicators in fuzzy synthetic evaluation for water quality assessment. J. Environ. Sci. 2006, 18, 1020–1023.
25. Pomplun, M.; Reingold, E.M.; Shen, J. Investigating the visual span in comparative search: The effects of task difficulty and divided attention. Cognition 2001, 81, B57–B67.
26. Available online: https://www.nationalgeographic.org/encyclopedia/ocean/ (accessed on 27 February 2020).
Figure 1. Schematic diagram of the proposed model for assessing camouflage effectiveness: (a) input of the camouflage object image and the object-overlapping region of the background image to the model; (b) the image color similarity index (ICSI) algorithm; (c) the gradient magnitude similarity deviation (GMSD) algorithm; (d) determination of the weight of each metric using the information entropy method.
Figure 2. Schematic of the experimental procedure used to validate the developed method: (a) the camouflage image shown within the background on the screen; (b) the workflow of each trial.
Figure 3. The correlations of the (a) camouflage similarity index (CSI), (b) universal image quality index (UIQI) and (c) image color similarity index and gradient magnitude similarity deviation (ICSI + GMSD) for hit rate.
Figure 4. The correlations of the (a) CSI, (b) UIQI and (c) ICSI + GMSD for detection time.
Figure 5. The correlations of the (a) CSI, (b) UIQI and (c) ICSI + GMSD for task difficulty.
Table 1. Pearson correlation coefficients of the camouflage effectiveness evaluation methods for various performance parameters.
Index         Hit Rate (%)   Detection Time (s)   Task Difficulty
UIQI          0.6240         0.6578               0.6882
CSI           0.7586         0.8028               0.8029
GMSD          0.6991         0.7498               0.7952
ICSI          0.6807         0.6887               0.7803
ICSI + GMSD   0.7821         0.8087               0.8637

A p-value < 0.05 was taken to indicate statistical significance.

Bai, X.; Liao, N.; Wu, W. Assessment of Camouflage Effectiveness Based on Perceived Color Difference and Gradient Magnitude. Sensors 2020, 20, 4672. https://doi.org/10.3390/s20174672