Article

Color-Based Image Retrieval Using Proximity Space Theory

Jing Wang, Lidong Wang, Xiaodong Liu, Yan Ren and Ye Yuan
1 College of Science, Dalian Maritime University, Dalian 116026, China
2 School of Automation, Shenyang Aerospace University, Shenyang 110136, China
* Authors to whom correspondence should be addressed.
Algorithms 2018, 11(8), 115; https://doi.org/10.3390/a11080115
Submission received: 30 June 2018 / Revised: 22 July 2018 / Accepted: 26 July 2018 / Published: 28 July 2018

Abstract: The goal of object retrieval is to rank a set of images by their similarity to a query image. Content-based image retrieval is currently a hot research topic, and color features play an important role in it; however, a measure of image similarity must be established in advance. The contributions of this paper are as follows. First, proximity space theory is used to retrieve the images relevant to a query image from a database; the color histogram of an image yields its top-ranked colors, which can be regarded as the object set. Second, the similarity is calculated with an improved dominance granule structure similarity method. We thus propose a color-based image retrieval method using proximity space theory. To assess the feasibility of this method, we conducted experiments on the COIL-20 image database and the Corel-1000 database. Experimental results demonstrate the effectiveness of the proposed framework and its applications.

1. Introduction

Image retrieval is the procedure of retrieving relevant images from a large database of images, and it usually occurs in one of two ways: text-based image retrieval or content-based image retrieval. Text-based image retrieval describes an image using one or more keywords, but manual annotation carries a huge cost burden. To address this defect, content-based image retrieval was introduced in [1]. Content-based image retrieval differs from traditional retrieval methods in that it uses visual contents such as key points, colors and textures to retrieve similar images from large-scale image databases. It has been widely employed in several research fields, such as machine vision, artificial intelligence and signal processing. Any content-based image retrieval system involves two steps. The first step is feature extraction; the key point in this step is to find a feature extraction method that precisely describes the content of an image. Color features, the basic characteristic of image content, are widely used in image retrieval [2,3,4,5]. For example, in [6], color features were extracted in HSV color space using the color moment method. Gopal et al. considered that using only grayscale information is imprecise for image matching [7], because some images have the same grayscale but different color information; they therefore incorporated color information into the SURF descriptor [8], achieving better precision than SURF alone. The second step is similarity measurement; in this step, a query image is selected, and the similarities between it and the images of the database are computed. In recent years, many different similarity computation methods have been proposed. Similarity can be obtained by calculating the distance between two feature vectors, such as the Euclidean, Manhattan and Chi-square distances [9]. Chen et al. calculated similarities between the query image and the images in the database by comparing the coding residual [10]. Kang et al. proposed a similarity measure posed as a sparse representation problem, matching feature dictionaries between the query image and the images of the database [11].
With the rapid development of modern computer technology and soft computing theory, many modern algorithms and theories have been applied to image segmentation and image retrieval, including fuzzy sets [12], rough set theory [13,14,15,16,17,18,19,20], near sets [21], mathematical morphology and neural networks. Computational Proximity (CP) [21] is a novel approach to image processing, which can identify nonempty sets of points that are either close to each other or far apart by comparing descriptions of pixels or regions in digital images or videos. Pták and Kropatsch [22] were the first to combine nearness in digital images with proximity spaces. In CP, a pixel is an object, and the various features of a pixel, such as grayscale, color, texture and geometry, are regarded as its attributes. For a point x, Φ(x) denotes the feature vector of x; then
$$\Phi(x) = (\phi_1(x), \ldots, \phi_i(x), \ldots, \phi_n(x)), \quad i \in \{1, \ldots, n\}.$$
Similarity between either points or regions in CP is defined in terms of the feature vectors that describe them. Feature vectors are expressed by introducing probe functions: a probe ϕ maps a geometric object to a feature value that is a real number [21]. The similarity of a pair of nonempty sets is determined by the number of elements they have in common. Put another way, nonempty sets are strongly near provided the sets have at least one point in common (introduced in [23]).
Example 1.
[21] Let A, B, C, D, E be nonempty sets of points in a picture and Φ = {r, g, b}, where r(A) equals the intensity of the redness of A, g(A) equals the intensity of the greenness of A and b(A) equals the intensity of the blueness of A. Then,
$$\Phi(A) = (1,\ 1,\ 1) \quad (\text{description of } A)$$
$$\Phi(B) = (0,\ 0,\ 1) \quad (\text{description of } B)$$
$$\Phi(C) = (0,\ 1,\ 1) \quad (\text{description of } C)$$
$$\Phi(D) = (0.4,\ 0.4,\ 0.4) \quad (\text{description of } D)$$
$$\Phi(E) = (0.4,\ 0.2,\ 0.3) \quad (\text{description of } E).$$
From this, Region A resembles Region B, since both regions have equal blue intensity. Region A strongly resembles (is strongly near) Region C, since both regions have equal green and blue intensities. By contrast, the description of Region A is far from the descriptions of Regions D and E, since Regions A and D (or A and E) have no equal red, green or blue intensities.
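To make the comparison concrete, here is a minimal Python sketch of Example 1 that counts shared probe values between region descriptions; the function and variable names are ours for illustration, not from [21].

```python
# Regions are described by probe values (red, green, blue intensity)
# and compared feature by feature, as in Example 1.

def shared_features(phi_a, phi_b):
    """Count how many probe values two descriptions have in common."""
    return sum(1 for a, b in zip(phi_a, phi_b) if a == b)

descriptions = {
    "A": (1.0, 1.0, 1.0),
    "B": (0.0, 0.0, 1.0),
    "C": (0.0, 1.0, 1.0),
    "D": (0.4, 0.4, 0.4),
    "E": (0.4, 0.2, 0.3),
}

for name, phi in descriptions.items():
    if name != "A":
        print(f"A vs {name}: {shared_features(descriptions['A'], phi)} shared probe values")
# A vs B: 1 (blue); A vs C: 2 (green, blue); A vs D: 0; A vs E: 0
```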
As the example above shows, the description of an image is defined by a feature vector containing feature values that are real numbers (the values are obtained using probes, e.g., color brightness and gradient orientation). Different objects give rise to different feature values. However, only a few features play a distinguishing role for an object (set), so it is necessary to consider the order of features when matching the description of another object. Specifically, for any nonempty set of points in an image, we first transform the RGB color space into the HSV (Hue, Saturation, Value) color space, and then obtain 72 kinds of color features by color quantization. The HSV color space, which is closer to human visual perception, is selected as the color feature representation of the image. Because the human eye has a limited ability to distinguish colors, and feature extraction becomes very difficult if the range of each color component is too large, it is necessary to reduce the amount of computation and improve efficiency by quantizing the HSV color components [24]; several dominant colors [25] are then chosen, which can effectively match images and reduce the computational complexity.

2. Proposed Method

This section presents the proposed method in two parts. First, the RGB color space is transformed into the HSV color space, and 72 kinds of color features are obtained by color quantization in HSV space; the similarity of images is then identified using proximity space theory. Second, we put forward an improved dominance granule structure formula as the similarity calculation method for proximity sets.

2.1. Feature Extraction

RGB and HSV color spaces are often used in digital image processing. RGB is the basic color space and can be represented by a cube model; red, green and blue are called the three primary colors, and any point of the cube corresponds to a color in the RGB color model. The HSV color space is established on human visual perception characteristics and can be represented by a cone model. “Hue” denotes the kind of color, “saturation” denotes the intensity of the color, whose value lies in the unit interval [0, 1], and “value” denotes the brightness of the color. Colors on the circumference of the top of the cone are pure colors, and the top of the cone corresponds to V = 1, which contains the three surfaces R = 1, G = 1 and B = 1 of the RGB color space. The conversion formulas from RGB to HSV color space are given by:
$$v = \max(r, g, b), \qquad s = \frac{v - \min(r, g, b)}{v},$$
$$r' = \frac{v - r + \sigma}{v - \min(r, g, b) + \sigma}, \qquad g' = \frac{v - g + \sigma}{v - \min(r, g, b) + \sigma}, \qquad b' = \frac{v - b + \sigma}{v - \min(r, g, b) + \sigma},$$
where $\sigma \ge 0$, $v' = v/255$, and r, g and b are the red, green and blue coordinates of a color in the RGB color space.
$$h' = \begin{cases} 5 + b', & r = \max(r, g, b) \ \text{and} \ g = \min(r, g, b) \\ 1 - g', & r = \max(r, g, b) \ \text{and} \ g \neq \min(r, g, b) \\ 1 + r', & g = \max(r, g, b) \ \text{and} \ b = \min(r, g, b) \\ 3 - b', & g = \max(r, g, b) \ \text{and} \ b \neq \min(r, g, b) \\ 3 + g', & b = \max(r, g, b) \ \text{and} \ r = \min(r, g, b) \\ 5 - r', & b = \max(r, g, b) \ \text{and} \ r \neq \min(r, g, b), \end{cases}$$
$$h = 60 \times h',$$
where $h \in [0, 360]$ is the hue and $s, v \in [0, 1]$ are the saturation and value coordinates of a color in the HSV color space. Then, according to the properties of the HSV color space, we can obtain 72 kinds of colors by the non-uniform quantization method, which reduces the dimension of the color feature vectors and the amount of calculation. The quantization formulas are as follows:
$$H = \begin{cases} 0, & 316 \le h \le 360 \ \text{or} \ 0 \le h \le 20 \\ 1, & 21 \le h \le 40 \\ 2, & 41 \le h \le 75 \\ 3, & 76 \le h \le 155 \\ 4, & 156 \le h \le 190 \\ 5, & 191 \le h \le 270 \\ 6, & 271 \le h \le 295 \\ 7, & 296 \le h \le 315, \end{cases}$$
$$S = \begin{cases} 0, & 0 \le s \le 0.2 \\ 1, & 0.2 < s \le 0.7 \\ 2, & 0.7 < s \le 1, \end{cases} \qquad V = \begin{cases} 0, & 0 \le v \le 0.2 \\ 1, & 0.2 < v \le 0.7 \\ 2, & 0.7 < v \le 1. \end{cases}$$
Based on the above quantization, the three-dimensional feature vector is mapped to a one-dimensional feature value using the equation $l = H Q_s Q_v + S Q_v + V$, where $Q_s$ and $Q_v$ are both set to 3 because saturation and value are each divided into three levels. Thus, the equation can be rewritten as
$$l = 9H + 3S + V.$$
According to the value ranges of H, S and V, we obtain $l \in \{0, 1, \ldots, 71\}$, i.e., 72 kinds of color features.
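A minimal Python sketch of Equations (1)–(7), i.e., the RGB-to-HSV conversion followed by the 72-bin non-uniform quantization, is given below. The naming is ours, and sigma is assumed to be a small positive constant that keeps the denominators nonzero for gray pixels (the paper does not fix its value).

```python
def rgb_to_hsv(r, g, b, sigma=1e-6):
    """r, g, b in [0, 255]; returns h in [0, 360] and s, v in [0, 1]."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = (v - mn) / v if v > 0 else 0.0
    rp = (v - r + sigma) / (v - mn + sigma)
    gp = (v - g + sigma) / (v - mn + sigma)
    bp = (v - b + sigma) / (v - mn + sigma)
    if mx == r:                      # red is the dominant channel
        hp = 5 + bp if mn == g else 1 - gp
    elif mx == g:                    # green is the dominant channel
        hp = 1 + rp if mn == b else 3 - bp
    else:                            # blue is the dominant channel
        hp = 3 + gp if mn == r else 5 - rp
    return 60 * hp, s, v / 255.0

def quantize(h, s, v):
    """Map (h, s, v) to one of the 72 color codes l = 9H + 3S + V."""
    if h >= 316 or h <= 20:
        H = 0
    elif h <= 40:
        H = 1
    elif h <= 75:
        H = 2
    elif h <= 155:
        H = 3
    elif h <= 190:
        H = 4
    elif h <= 270:
        H = 5
    elif h <= 295:
        H = 6
    else:
        H = 7
    S = 0 if s <= 0.2 else (1 if s <= 0.7 else 2)
    V = 0 if v <= 0.2 else (1 if v <= 0.7 else 2)
    return 9 * H + 3 * S + V
```

For instance, a pure red pixel gives `rgb_to_hsv(255, 0, 0)` ≈ (360, 1, 1), which `quantize` maps to H = 0, S = 2, V = 2, i.e., code l = 8.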

2.2. Similarity Measure Based on Dominance Perceptual Information

Feature extraction and similarity measurement are the two important steps in image retrieval. In this paper, color is selected as the retrieval feature, and color features are quantized into 72 values, $\Phi = \{0, 1, \ldots, 71\}$. If an image X has T pixels, of which Q pixels have the color feature $c \in \Phi$, then $\phi_c(X) = Q/T$. For example, $\phi_0(X)$ is the probability that the color feature equals 0 in image X, $\phi_1(X)$ is the probability that it equals 1, and $\phi_{71}(X)$ is the probability that it equals 71; thus $\Phi(X) = \{\phi_0(X), \phi_1(X), \ldots, \phi_{71}(X)\}$, from which the color histogram (see Figure 1) of a query image is obtained. The Top-m color features $\Phi_m(X) = \{l_1^x, l_2^x, \ldots, l_m^x\}$, $l_i \in \Phi = \{0, 1, 2, \ldots, 71\}$, are obtained by sorting $\Phi(X)$ in descending order.
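The histogram step can be sketched in Python as follows, reusing the hypothetical rgb_to_hsv and quantize helpers from the previous sketch; the function name is ours.

```python
# phi_c(X) = Q/T for each of the 72 color codes, then the Top-m codes
# by descending frequency.
from collections import Counter

def top_m_colors(pixels, m=3):
    """pixels: iterable of (r, g, b) tuples; returns the m dominant codes."""
    counts = Counter(quantize(*rgb_to_hsv(r, g, b)) for r, g, b in pixels)
    total = sum(counts.values())                      # T, the number of pixels
    hist = {c: q / total for c, q in counts.items()}  # phi_c(X) = Q / T
    return [c for c, _ in sorted(hist.items(), key=lambda kv: -kv[1])[:m]]
```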
In this paper, we take the Top-m dominant color features in HSV space to match images and thereby improve retrieval efficiency; the color feature vector $\Phi_m(X)$ therefore serves as the distinguishing description of image X. The image similarity measure is equally important. Standard choices such as the Euclidean distance or the cosine formula lose significance here: for example, the similarity between the vectors [1, 2, 3] and [2, 4, 6] equals 1 under the cosine formula even though the vectors differ. Inspired by dominance rough set theory, we propose a novel dominance granule structure similarity method as the similarity measurement, defined as follows:
Definition 1.
Let $\Phi_m(X) = \{l_1^x, l_2^x, \ldots, l_m^x\}$, $\Phi_m(Y) = \{l_1^y, l_2^y, \ldots, l_m^y\}$ and $\Phi_m(X), \Phi_m(Y) \subseteq \Phi$, where X and Y represent the query image and an image of the database, respectively, and $a_x$ and $a_y$ represent the color attributes of X and Y. Let A be the set of all pre-order relations on $\Phi_m(X)$ and $\Phi_m(Y)$, with $\succeq_{a_x}, \succeq_{a_y} \in A$ [26]. Then $\Phi_m(X)/a_x = \{[l_1^x]_{a_x}, [l_2^x]_{a_x}, \ldots, [l_m^x]_{a_x}\}$ and $\Phi_m(Y)/a_y = \{[l_1^y]_{a_y}, [l_2^y]_{a_y}, \ldots, [l_m^y]_{a_y}\}$ are the corresponding dominance granule structures, and the similarity between X and Y is denoted by
$$S(\Phi/a_x, \Phi/a_y) = 1 - \frac{1}{|\Phi|} \sum_{i=1}^{|\Phi|} \frac{|[l_i]_{a_x} \,\Delta\, [l_i]_{a_y}|}{|\Phi|}. \tag{8}$$
Because Equation (8) calculates similarity over the global color features, its efficiency suffers from the large amount of computation. Therefore, only the Top-m color features of the image are selected for image matching, and Equation (8) is improved as follows:
$$S(\Phi_m(X)/a_x, \Phi_m(Y)/a_y) = 1 - \frac{1}{m} \sum_{i=1}^{m} \min_{j} \frac{|[l_i^x]_{a_x} \,\Delta\, [l_j^y]_{a_y}|}{m}, \quad j \in \{1, 2, \ldots, m\}, \tag{9}$$
where $|\Phi|$ is the number of elements in the set Φ, $|[l_i^x]_{a_x} \,\Delta\, [l_j^y]_{a_y}| = |[l_i^x]_{a_x} \cup [l_j^y]_{a_y}| - |[l_i^x]_{a_x} \cap [l_j^y]_{a_y}|$, $[l_i]_{a_x} = \{l_j \mid l_j \succeq_{a_x} l_i,\ 1 \le j \le m\}$ and $[l_i]_{a_y} = \{l_j \mid l_j \succeq_{a_y} l_i,\ 1 \le j \le m\}$. The ordinal relation between objects in terms of the attribute $a_x$ or $a_y$ is denoted by ⪰, and $l_1 \succeq_{a_x} l_2$ (or $l_1 \succeq_{a_y} l_2$) means that $l_1$ is at least as good as $l_2$ with respect to the attribute $a_x$ (or $a_y$).
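A minimal Python sketch of Equation (9) follows, under the assumption (consistent with Example 2 below) that a Top-m tuple sorted by descending frequency induces the dominance granules $[l_i] = \{l_1, \ldots, l_i\}$; the function names are ours.

```python
def granules(top_m):
    """[l_1] = {l_1}, [l_2] = {l_1, l_2}, ..., [l_m] = all m features."""
    return [set(top_m[: i + 1]) for i in range(len(top_m))]

def similarity(top_x, top_y):
    """Equation (9): 1 - (1/m) * sum_i min_j |G_x[i] symdiff G_y[j]| / m."""
    m = len(top_x)
    gx, gy = granules(top_x), granules(top_y)
    total = sum(min(len(a ^ b) for b in gy) / m for a in gx)  # ^ is symmetric difference
    return 1 - total / m
```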
Example 2.
To demonstrate the feasibility of this method, we give a worked instance. Suppose m = 3 and $\Phi_3(I) = (7, 9, 1)$ (the description of the query image I), where 7, 9 and 1 are quantized color features with $\phi_7(I) > \phi_9(I) > \phi_1(I)$. Let $\Phi_3(I_1)$, $\Phi_3(I_2)$, $\Phi_3(I_3)$, etc., denote the Top-3 color features of the images $I_1, I_2, \ldots, I_{13}$ in the image database. The similarities computed by Equation (9) are summarized in Equation (10).
According to Equation (9), if $7 \succeq_{a_x} 9 \succeq_{a_x} 1$ and $7 \succeq_{a_y} 1 \succeq_{a_y} 9$, then
$$[7]_{a_x} = \{7\}, \quad [9]_{a_x} = \{7, 9\}, \quad [1]_{a_x} = \{7, 9, 1\},$$
$$[7]_{a_y} = \{7\}, \quad [1]_{a_y} = \{7, 1\}, \quad [9]_{a_y} = \{7, 1, 9\},$$
$$\Phi_3(I)/a_x = \{\{7\}, \{7, 9\}, \{7, 9, 1\}\}, \qquad \Phi_3(I_2)/a_y = \{\{7\}, \{7, 1\}, \{7, 1, 9\}\},$$
$$S(\Phi_3(I)/a_x, \Phi_3(I_2)/a_y) = 1 - \frac{1}{3} \sum_{i=1}^{3} \min_j \frac{|[l_i^x]_{a_x} \,\Delta\, [l_j^y]_{a_y}|}{3} = 1 - \frac{1}{3} \times \left(\frac{0}{3} + \frac{1}{3} + \frac{0}{3}\right) = \frac{8}{9} \approx 0.89,$$
which Equation (10) lists, like the other similarities, to one decimal place.
$$Sim = \begin{cases} 1, & \Phi_3(I_1) = (7, 9, 1) \\ 0.8, & \Phi_3(I_2) = (7, 1, 9) \\ 0.8, & \Phi_3(I_3) = (9, 7, 1) \\ 0.8, & \Phi_3(I_4) = (1, 7, 9) \\ 0.8, & \Phi_3(I_5) = (7, 9, l_3),\ l_3 \notin \Phi_3(I) \\ 0.7, & \Phi_3(I_6) = (7, 1, l_3),\ l_3 \notin \Phi_3(I) \\ 0.6, & \Phi_3(I_7) = (9, 1, 7) \\ 0.6, & \Phi_3(I_8) = (1, 9, 7) \\ 0.6, & \Phi_3(I_9) = (7, l_2, l_3),\ l_2, l_3 \notin \Phi_3(I) \\ 0.5, & \Phi_3(I_{10}) = (9, 1, l_3),\ l_3 \notin \Phi_3(I) \\ 0.4, & \Phi_3(I_{11}) = (9, l_2, l_3),\ l_2, l_3 \notin \Phi_3(I) \\ 0.2, & \Phi_3(I_{12}) = (1, l_2, l_3),\ l_2, l_3 \notin \Phi_3(I) \\ 0, & \Phi_3(I_{13}) = (l_1, l_2, l_3),\ l_1, l_2, l_3 \notin \Phi_3(I). \end{cases} \tag{10}$$
As shown in Equation (10), the color feature order of $I_1$ is the same as that of I, so the similarity between $I_1$ and I is 1. Image $I_5$ resembles (is strongly near) image I more than image $I_6$ does, since the color feature order of $I_5$ is closer to that of I. Image $I_{13}$ does not resemble image I, since all their color features differ, so the similarity between $I_{13}$ and I is 0. Images are therefore easier to match when their color feature sets share more identical elements; whether corresponding elements of the feature vectors are equal also matters, as the comparison of $I_6$ and $I_7$ shows. The similarity measure proposed in this paper can improve the subjective and objective consistency of retrieval results.
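The similarity() sketch above reproduces these figures; it returns exact fractions, while Equation (10) reports one-decimal values.

```python
# Reproducing Example 2 with the similarity() sketch:
print(similarity((7, 9, 1), (7, 9, 1)))  # 1.0            -> Sim = 1   (I_1)
print(similarity((7, 9, 1), (7, 1, 9)))  # 0.888... = 8/9 -> Sim = 0.8 (I_2)
print(similarity((7, 9, 1), (9, 1, 7)))  # 0.666... = 2/3 -> Sim = 0.6 (I_7)
```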

2.3. Algorithm Flow Chart

As discussed above, the framework of our proposed method is shown in Figure 2. Its main procedures are summarized as follows:
Step 1: Convert the query image's RGB values (r, g, b) to HSV values (h, s, v) via the nonlinear transformation of Section 2.1.
Step 2: Obtain 72 kinds of colors by quantization.
Step 3: Compute the color histogram of the query image and take its Top-m dominant color features to form the set Φ_m(X); do the same for each image in the image database to obtain Φ_m(Y).
Step 4: Use the proposed similarity measure to compute the similarity between the query image and each image of the image database, as in the sketch below.
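The four steps compose into a single retrieval routine. The following sketch reuses the rgb_to_hsv, quantize, top_m_colors and similarity helpers sketched earlier and assumes the database has already been decoded into per-image pixel lists; the function and parameter names are ours.

```python
def retrieve(query_pixels, database, m=3, top_k=20):
    """database: dict mapping image id -> list of (r, g, b) pixels.
    Returns the top_k image ids ranked by decreasing similarity."""
    q = top_m_colors(query_pixels, m)                      # Steps 1-3
    scores = {img_id: similarity(q, top_m_colors(px, m))   # Step 4
              for img_id, px in database.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```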
Experimental results for the Top-20 images, obtained with the proposed similarity measurement, are shown in Figure 3; the query image is the first one in each category.

3. Experiment and Results

In this section, experimental results of the proposed method on the COIL-20 [7] image database and the Corel-1000 [10] database are presented. The COIL-20 database contains 1440 images divided into twenty categories (see Figure 4), with 72 images per category; the size of each image is 128 × 128. Table 1 shows the results of the enhanced SURF method and of our method on the COIL-20 database. $O_R$ is the number of relevant images retrieved, while $O_T$ is the total number of images retrieved. Thus, the precision and recall of this retrieval system are given by Equation (11):
$$Precision = \frac{O_R}{O_T}, \qquad Recall = \frac{O_R}{72}. \tag{11}$$
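As a quick sanity check, Equation (11) can be evaluated directly; the numbers below are the category "a" entries of Table 1a, and the helper name is ours.

```python
def precision_recall(o_r, o_t, category_size=72):
    """Equation (11); 72 is the per-category size in COIL-20."""
    return o_r / o_t, o_r / category_size

p, r = precision_recall(o_r=65, o_t=238)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.273, recall=0.903
```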
According to the results in Table 1b, the average precision is 0.4818 and the average recall is 0.8387, obtained by averaging the precision and recall columns, respectively. Compared with enhanced SURF, these results show that the average precision and average recall of our method increase by 16.77% and 28.61%, respectively.
The Corel-1000 database includes 1000 images from ten categories (African tribes, Beaches, Buildings, Buses, Dinosaurs, Elephants, Flowers, Horses, Mountains, and Foods) (see Figure 5).
Each category contains 100 images with size 384 × 256 or 256 × 384, and every image in the database is used in turn as a query. Table 2 shows the precision and recall of our proposed method compared with CLD [29], Color Moment [28] and CSD [27].
Precision is computed from the Top-10 retrieved images and recall from the Top-100. CLD takes spatial information into account but fails to find rotated or translated images similar to the query. Color Moment uses nine color components (three low-order moments per color channel), but low-order moments are less discriminating, so it is often combined with other features. CSD is mainly used for static image retrieval; it considers only whether the structure contains a certain color, not its frequency.
Table 2 shows that the average precision of our method is 17.4%, 16.1% and 5.6% higher than CLD, Color Moment and CSD, respectively, and that the average recall is 9.4%, 5.5% and 2.5% higher. Our method retrieves the buildings, elephants and foods categories better than the other algorithms. For example, buildings images are retrieved more precisely by our method: its precision is 32.3%, 28.2% and 11.1% higher than CLD, Color Moment and CSD, respectively, and its recall is 24%, 18.6% and 5.6% higher. The dinosaurs and flowers categories are also retrieved with high precision by the proposed algorithm. Table 3 shows the precision of retrieving the Top-20 database images for each query image, and some retrieved images from the dinosaurs, horses and foods categories are shown in Figure 6.

4. Conclusions

This paper provides a novel image similarity measurement method, which uses an improved dominance granule structure formula as the similarity calculation for proximity sets. Experiments on public datasets and comparisons with other methods (CLD, Color Moment and CSD) show that image retrieval based on this method produces better retrieval results. The proposed method improves retrieval accuracy and provides a new approach in the field of image retrieval.

Author Contributions

Conceptualization, J.W. and L.W.; Methodology, J.W.; Software, J.W.; Validation, J.W. and L.W.; Formal Analysis, J.W.; Writing-Original Draft Preparation, J.W.; Writing-Review & Editing, J.W., L.W., X.L. and Y.R.; Visualization, J.W.; Supervision, L.W., X.L., Y.R. and Y.Y.

Funding

This work was partially supported by the National Natural Science Foundation of China under Grants Nos. 61602321 and 61773352, the Aviation Science Foundation under No. 2017ZC54007, the Science Fund Project of the Liaoning Province Education Department under No. L201614, the Natural Science Fund Project of Liaoning Province under No. 20170540694, the Fundamental Research Funds for the Central Universities (Nos. 3132018225 and 3132018228) and the Doctor Startup Foundation of Shenyang Aerospace University (No. 13YB11).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Amores, J.; Sebe, N.; Radeva, P.; Gevers, T.; Smeulders, A. Boosting Contextual Information in Content-Based Image Retrieval. In Proceedings of the 6th ACM SIGMM International Workshop on Multimedia Information Retrieval, New York, NY, USA, 15–16 October 2004; pp. 31–38.
2. Zhang, Y.P.; Zhang, S.B.; Yan, Y.T. A Multi-view Fusion Method for Image Retrieval. In Proceedings of the International Congress on Image and Signal Processing, BioMedical Engineering and Informatics, Shanghai, China, 14–16 October 2017; pp. 379–383.
3. Chen, Y.Y. The Image Retrieval Algorithm Based on Color Feature. In Proceedings of the IEEE International Conference on Software Engineering and Service Science, Beijing, China, 26–28 August 2016; pp. 647–650.
4. Liang, C.W.; Chung, W.Y. Color Feature Extraction and Selection for Image Retrieval. In Proceedings of the IEEE International Conference on Advanced Materials for Science and Engineering, Tainan, Taiwan, 12–13 November 2016; pp. 589–592.
5. Li, M.Z.; Jiang, L.H. An Improved Algorithm Based on Color Feature Extraction for Image Retrieval. In Proceedings of the International Conference on Intelligent Human-Machine Systems and Cybernetics, Hangzhou, China, 25–26 August 2016.
6. Alhassan, A.K.; Alfaki, A.A. Color and Texture Fusion-Based Method for Content-Based Image Retrieval. In Proceedings of the International Conference on Communication, Control, Computing and Electronics, Khartoum, Sudan, 16–18 January 2017; pp. 1–6.
7. Gopal, N.; Bhooshan, R.S. Content-Based Image Retrieval Using Enhanced SURF. In Proceedings of the Fifth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics, Patna, India, 16–19 December 2015; pp. 1–4.
8. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359.
9. Kaur, M.; Sohi, N. A Novel Technique for Content-Based Image Retrieval Using Color, Texture and Edge Features. In Proceedings of the International Conference on Communication and Electronics Systems, Coimbatore, India, 21–22 October 2016; pp. 1–7.
10. Chen, Q.S.; Ding, Y.Y.; Li, H.; Wang, X.; Wang, J.; Deng, X. A Novel Multi-Feature Fusion and Sparse Coding Based Framework for Image Retrieval. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, San Diego, CA, USA, 5–8 October 2014; pp. 2391–2396.
11. Kang, L.W.; Hsu, C.Y.; Chen, H.W.; Lu, C.S.; Lin, C.Y. Feature-Based Sparse Representation for Image Similarity Assessment. IEEE Trans. Multimed. 2011, 13, 1019–1030.
12. Bakhshali, M.A. Segmentation and Enhancement of Brain MR Images Using Fuzzy Clustering Based on Information Theory. Soft Comput. 2016, 21, 6633–6640.
13. Pawlak, Z. Rough Sets. Int. J. Comput. Inform. Sci. 1982, 11, 341–356.
14. Lim, K.Y.; Rajeswari, M. Segmenting Object with Ambiguous Boundary Using Information Theoretic Rough Sets. Int. J. Electron. Commun. 2017, 77, 50–56.
15. Senthilkumaran, N.; Rajesh, R. Brain Image Segmentation Using Granular Rough Sets. Int. J. Arts Sci. 2009, 3, 69–78.
16. Roy, P.; Goswami, S.; Chakraborty, S.; Azar, A.T.; Dey, N. Image Segmentation Using Rough Set Theory: A Review. Int. J. Rough Sets Data Anal. 2014, 1, 62–74.
17. Khodaskar, A.A.; Ladhake, S.A. Emphasis on Rough Set Theory for Image Retrieval. In Proceedings of the International Conference on Computer Communication and Informatics, Coimbatore, India, 8–10 January 2015; pp. 1–6.
18. Thilagavathy, C.; Rajesh, R. Rough Set Theory and Its Applications in Image Processing: A Short Review. In Proceedings of the 3rd World Conference on Applied Sciences, Engineering and Technology, Kathmandu, Nepal, 27–29 September 2014; pp. 27–29.
19. Rocio, L.M.; Raul, S.Y.; Victor, A.R.; Alberto, P.R. Improving a Rough Set Theory-Based Segmentation Approach Using Adaptable Threshold Selection and Perceptual Color Spaces. J. Electron. Imaging 2014, 23, 013024.
20. Wasinphongwanit, P.; Phokharatkul, P. Image Retrieval Using Contour Feature with Rough Set Method. In Proceedings of the International Conference on Computer, Mechatronics, Control and Electronic Engineering, Changchun, China, 24–26 August 2010; Volume 6, pp. 349–352.
21. Peters, J.F. Computational Proximity: Excursions in the Topology of Digital Images; Springer: Berlin, Germany, 2016.
22. Pták, P.; Kropatsch, W.G. Nearness in Digital Images and Proximity Spaces. In Proceedings of the 9th International Conference on Discrete Geometry, Nantes, France, 18–20 April 2016; pp. 69–77.
23. Peters, J.F. Visibility in Proximal Delaunay Meshes and Strongly Near Wallman Proximity. Adv. Math. 2015, 4, 41–47.
24. Ma, J. Content-Based Image Retrieval with HSV Color Space and Texture Features. In Proceedings of the International Conference on Web Information Systems and Mining, Shanghai, China, 7–8 November 2009; pp. 61–63.
25. Yan, Y.; Ren, J.; Li, Y.; Windmill, J.; Ijomah, W. Fusion of Dominant Colour and Spatial Layout Features for Effective Image Retrieval of Coloured Logos and Trademarks. In Proceedings of the IEEE International Conference on Multimedia Big Data, Beijing, China, 20–22 April 2015; pp. 306–311.
26. Schroder, B.S.W. Ordered Sets: An Introduction; Springer: Berlin, Germany, 2002.
27. Messing, D.S.; Beek, P.V.; Errico, J.H. The MPEG-7 Colour Structure Descriptor: Image Description Using Colour and Local Spatial Information. In Proceedings of the International Conference on Image Processing, Thessaloniki, Greece, 7–10 October 2001; pp. 670–673.
28. Shih, J.L.; Chen, L.H. Colour Image Retrieval Based on Primitives of Colour Moments. IEE Proc. Vis. Image Signal Process. 2002, 149, 370–376.
29. Kasutani, E.; Yamada, A. The MPEG-7 Color Layout Descriptor: A Compact Image Feature Description for High-Speed Image/Video Segment Retrieval. In Proceedings of the International Conference on Image Processing, Thessaloniki, Greece, 7–10 October 2001; pp. 674–677.
Figure 1. (a) The query image; and (b) the color histogram of image (a).
Figure 2. The framework of the content-based image retrieval using proximity space theory.
Figure 3. The left image shows car category results, while the right image shows cup category results.
Figure 4. Images selected from the COIL-20 dataset.
Figure 5. Ten classes of the Corel-1000 dataset.
Figure 6. Retrieved images for the Top 20: (a) the dinosaurs category; (b) the horses category; and (c) the foods category. The query image is the first image in each class.
Table 1. Comparison of precision and recall between the enhanced SURF method and our proposed method.

(a) Results of the enhanced SURF method.

Category | O_R | O_T | Precision | Recall
a | 65 | 238 | 0.273 | 0.903
b | 45 | 215 | 0.209 | 0.625
c | 24 | 56 | 0.428 | 0.333
d | 43 | 92 | 0.467 | 0.597
e | 25 | 268 | 0.093 | 0.347
f | 65 | 232 | 0.280 | 0.902
g | 13 | 31 | 0.419 | 0.180
h | 30 | 31 | 0.968 | 0.417
i | 54 | 1087 | 0.050 | 0.750
j | 17 | 168 | 0.101 | 0.236
k | 42 | 91 | 0.462 | 0.583
l | 49 | 762 | 0.064 | 0.680
m | 60 | 1105 | 0.054 | 0.833
n | 34 | 57 | 0.596 | 0.472
o | 21 | 96 | 0.219 | 0.292
p | 46 | 417 | 0.110 | 0.639
q | 62 | 184 | 0.337 | 0.861
r | 35 | 97 | 0.361 | 0.486
s | 38 | 89 | 0.427 | 0.528
t | 28 | 77 | 0.364 | 0.389

(b) Results of our proposed method.

Category | O_R | O_T | Precision | Recall
a | 72 | 247 | 0.291 | 1
b | 56 | 219 | 0.256 | 0.778
c | 55 | 58 | 0.948 | 0.763
d | 47 | 95 | 0.494 | 0.652
e | 55 | 272 | 0.202 | 0.764
f | 72 | 242 | 0.298 | 1
g | 33 | 33 | 1 | 0.458
h | 24 | 31 | 0.774 | 0.333
i | 72 | 1116 | 0.065 | 1
j | 66 | 176 | 0.375 | 0.917
k | 70 | 96 | 0.729 | 0.972
l | 72 | 788 | 0.091 | 1
m | 72 | 1197 | 0.060 | 1
n | 52 | 59 | 0.881 | 0.722
o | 59 | 109 | 0.541 | 0.819
p | 72 | 453 | 0.159 | 1
q | 72 | 194 | 0.371 | 1
r | 68 | 100 | 0.680 | 0.944
s | 69 | 89 | 0.772 | 0.958
t | 50 | 77 | 0.649 | 0.694
Table 2. Comparison of precision and recall among CLD, Color Moment, CSD and our proposed method.

Category Name | CLD [29] (Precision / Recall) | Color Moment [28] (Precision / Recall) | CSD [27] (Precision / Recall) | Proposed Method (Precision / Recall)
African Tribes | 0.213 / 0.144 | 0.389 / 0.260 | 0.562 / 0.347 | 0.505 / 0.312
Beaches | 0.529 / 0.283 | 0.295 / 0.169 | 0.288 / 0.153 | 0.442 / 0.209
Buildings | 0.341 / 0.157 | 0.382 / 0.211 | 0.553 / 0.341 | 0.664 / 0.397
Buses | 0.255 / 0.113 | 0.580 / 0.319 | 0.447 / 0.258 | 0.703 / 0.435
Dinosaurs | 0.907 / 0.510 | 0.938 / 0.785 | 0.794 / 0.614 | 0.962 / 0.539
Elephants | 0.477 / 0.235 | 0.491 / 0.302 | 0.630 / 0.260 | 0.706 / 0.339
Flowers | 0.700 / 0.288 | 0.679 / 0.360 | 0.537 / 0.253 | 0.733 / 0.300
Horses | 0.980 / 0.703 | 0.747 / 0.401 | 0.641 / 0.338 | 0.885 / 0.549
Mountains | 0.566 / 0.357 | 0.260 / 0.157 | 0.613 / 0.294 | 0.441 / 0.229
Foods | 0.127 / 0.057 | 0.463 / 0.277 | 0.320 / 0.138 | 0.792 / 0.482
Average | 0.509 / 0.285 | 0.522 / 0.324 | 0.627 / 0.354 | 0.683 / 0.379
Table 3. Precision rate for retrieving the Top-20 database images for each query image.

Query Image | Precision | Query Image | Precision
African Tribes | 46% | Elephants | 59%
Beaches | 34% | Flowers | 60%
Buildings | 60% | Horses | 83%
Buses | 63% | Mountains | 36%
Dinosaurs | 92% | Foods | 72%
