Article

High-Resolution Remote Sensing Image Retrieval Based on CNNs from a Dimensional Perspective

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Remote Sens. 2017, 9(7), 725; https://doi.org/10.3390/rs9070725
Submission received: 26 May 2017 / Revised: 4 July 2017 / Accepted: 9 July 2017 / Published: 14 July 2017

Abstract

Because of recent advances in Convolutional Neural Networks (CNNs), traditional CNNs have been employed to extract thousands of codes as feature representations for image retrieval. In this paper, we propose that more powerful features for high-resolution remote sensing image representations can be learned using only several tens of codes; this approach can improve the retrieval accuracy and decrease the time and storage requirements. To accomplish this goal, we first investigate the learning of a series of features with different dimensions using a few tens to thousands of codes via our improved CNN frameworks. Then, a Principal Component Analysis (PCA) is introduced to compress the high-dimensional remote sensing image feature codes learned by traditional CNNs. Comprehensive comparisons are conducted to evaluate the retrieval performance based on feature codes of different dimensions learned by the improved CNNs as well as the PCA compression. To further demonstrate the powerful ability of the low-dimensional feature representation learned by the improved CNN frameworks, a Feature Weighted Map (FWM), which can perform feature visualization and provides a better understanding of the nature of Deep Convolutional Neural Networks (DCNNs) frameworks, is explored. All the CNN models are trained from scratch using a large-scale and high-resolution remote sensing image archive, which will be published and made available to the public. The experimental results show that our method outperforms state-of-the-art CNN frameworks in terms of accuracy and storage.


1. Introduction

With the rapid development of Earth observation technology, remote imaging sensors with high spatial resolution have led to rapid increases in the volume of acquired remote sensing images. However, the effective management and retrieval of large scale remote sensing image databases represent considerable challenges that must be resolved. As a result, Content-Based High-Resolution Remote Sensing Imagery Retrieval (CB-HRRS-IR), which aims to search for and return the most relevant or similar images using a query image, has drawn increasing attention in recent years [1].
Currently, there are two essential modules that serve as solutions to CB-HRRS-IR: the feature representation module and the feature searching module [2]. Specifically, a feature vector is extracted to describe the visual content of an image in the feature representation module. Based on the extracted features, similarities between a query image and other images from the image database are calculated; then, the system returns the most similar images by ranking similarities. Both modules play important roles in an image retrieval system. Obviously, the length of the image features and the method of similarity measurement have a significant impact on search efficiency, especially for enormous image archives; moreover, the extracted features largely determine the retrieval performance because they govern how well the images are represented.
To achieve a satisfactory performance for CB-HRRS-IR, this paper focuses on extracting powerful features for better remote sensing image representation. Because a high-resolution remote sensing image contains abundant information with a large image size, high-dimensional feature vectors with hundreds or even thousands of codes are usually employed for the image representation [2,3]. However, regarding the similarity measurements of different images, especially in a large image database, high-dimensional features greatly increase the computational cost. Thus, long feature codes with high dimensions for image representation will have a notable negative impact on the retrieval efficiency. In this paper, we propose and analyze Deep Compact Codes (DCCs) with low dimensions for remote sensing image representations to advance the image retrieval efficiency. Specifically, several schemes are experimentally implemented to learn the DCCs via Deep Convolutional Neural Networks (DCNNs) for remote sensing image representations. Additionally, the CNNs are trained from scratch using a large and high-resolution remote sensing image archive that we collected. This archive will be made publicly available to other researchers. Our learned DCCs show a better representation for remote sensing image retrieval. This representation can substantially improve the image retrieval performance with respect to both precision and efficiency. In addition, we explore a new method called the Feature Weighted Map (FWM) to assist in the visual understanding of deep features. The FWM can facilitate the process of determining the mechanisms underlying the efficacy of the proposed DCC method. Furthermore, our proposed visualization method can also provide insights into the information that can be learned from the DCNN frameworks.
The remainder of this paper is organized as follows. In Section 2, we introduce the background and review the most relevant studies on remote sensing image retrieval. In Section 3, we introduce the image feature extraction methods, including the DCC learning schemes and PCA compression, and describe the evaluation methods from both quantitative and visual perspectives. In Section 4, we introduce the large remote sensing image archive, which will be released publicly to other researchers, and provide the experimental and analysis results. In Section 5, we describe the FWM. Finally, in Section 6, we draw conclusions regarding this work.

2. Background and Related Studies

Classical image features, such as spectral features [4,5], texture features [6,7,8], shape features [9,10] and morphological features [7], are the most common features used for remote sensing image representation. Great success with regard to high-resolution remote sensing image retrieval has been achieved using these global features. Compared with global features, local features are also good at representing remote sensing images with high spatial resolution, and they allow for the recognition of a greater range of objects and spatial patterns in small patches. Yang et al. [1] conducted the first investigation of the use of local invariant features for overhead image retrieval. Extensive experiments showed the effectiveness and practicability of local features for high-resolution aerial imagery retrieval. Yang et al. [11] proposed a method that represents images using local features based on the typical Bag-of-Words (BoW) framework, which can improve the recognition performance of the remote sensing image retrieval process and reduce the burden of building the image index. Rather than extracting either traditional global or local features, Wang et al. [12] and Bosilj et al. [13] explored new methods that can utilize both global and local features for remote sensing image retrieval. Additionally, recent studies have proposed methods that account for structure information in the image representations [14,15,16].
Although accurate features can be extracted via various methods for remote sensing image retrieval, they cannot be easily employed to describe a user’s understanding of an image, which is significant for capturing a user’s intention. In other words, the semantic contents of an image cannot be well revealed by these features. To alleviate this issue, Liu et al. [17] proposed a region-level semantic mining approach for image representation and constructed a uniform region-based depiction for each image by segmenting the images by region. Then, the semantic features were extracted using a probabilistic method, which achieved good retrieval precision and recall. Wang et al. [18] proposed a remote sensing image retrieval scheme using image scene semantic matching. In addition, a prototype system that uses a coarse-to-fine retrieval scheme was implemented, and it achieved good retrieval accuracy. Recently, Linda et al. [19] presented a novel semantic mining and hashing method for remote sensing image retrieval, which showed good performance in their implemented retrieval system.
Most of the features mentioned in the above studies were low-level features that were individually designed, and they have been employed with a certain degree of success in remote sensing image retrieval. However, designing a stable and powerful feature representation for images could be a difficult task. In addition, remote sensing images with high resolution usually represent large geospatial scenes that contain abundant and complex visual contents. These factors can reduce the ability of low-level features to represent high-level abstract concepts in remote sensing images, which is known as the semantic gap between low-level features and high-level semantic content.
Recently, deep learning was shown to achieve considerable success in many tasks, including speech recognition [20,21], object recognition and detection [22,23,24,25] and natural language processing [26,27]. Inspired by such great success, high-level features extracted via deep learning have been introduced in the application of content-based high-resolution remote sensing image retrieval. Zhou et al. [28] utilized an unsupervised feature learning framework based on an auto-encoder to map low-level feature descriptors to sparse feature representations for remote sensing image retrieval. Li et al. [2] also employed unsupervised multilayer feature learning and collaborative affinity metric fusion for remote sensing image retrieval. These methods can offer a higher-level feature representation of remote sensing images and outperform conventional features. However, the improvements are limited, and the unsupervised feature learning framework might not provide results that can be generalized because these frameworks are based on shallow networks that increase the difficulty of learning sufficiently powerful feature representations of remote sensing images. Moreover, the features learned by an unsupervised framework might require longer codes for image representation to achieve satisfactory retrieval results. This approach will obviously reduce the image retrieval efficiency. Napoletano [29] conducted an extensive evaluation of visual descriptors for the content-based retrieval of Remote Sensing (RS) images, including global, local, and CNN features. The results demonstrated that CNN-based and local features have the best performance in different retrieval schemes. Zhou et al. [30] investigated the extraction of features from both fully-connected and convolutional layers for remote sensing image retrieval. They [29,30] employed only CNN models and performed fine-tuning on a public remote sensing image dataset for feature extraction. Intensive comparisons were conducted to evaluate the performances of different models. Although these methods have achieved good performance in certain domains via DCNN frameworks, the learned deep features have not been sufficiently evaluated or described.
Visualization studies [23,31,32,33] have been conducted to better understand these deep features. Zeiler [23] developed deconvolutional networks to provide insights into the functions of intermediate feature layers and the operation of the classifier. However, deconvolutional networks do not always work well without max-pooling layers. Based on Zeiler’s work, Springenberg [31] explored a guided backpropagation method that results in qualitative improvements. Zhou [34] developed a visualization method called class activation mapping that can localize objects. However, these methods only focus on the convolutional layers and ignore the fully connected layers, which play significant roles in feature representation. Mahendran [32] and Dosovitskiy [33] developed approaches to studying image representations by inverting deep features at different layers; however, these approaches show only image information rather than object-related information preserved in the final deep features. Selvaraju [35] used the class-specific gradient information flowing into the final convolutional layer of a CNN to produce a coarse localization map of an object. However, this method can generate only a class-oriented visualization map based on the final classification score and is not applicable to non-classification tasks.
In this paper, we investigate and evaluate the performance of remote sensing image retrieval from a dimensional perspective. We analyze a series of different dimensional features that we call DCCs, which are extracted by improved classical CNNs. In addition, we also perform a PCA to compress the high dimensional feature codes learned by the DCNNs, a strategy that is referred to as Deep Principal Component Analysis (DPCA) in the retrieval experiments. Furthermore, we explore a new method for visualizing deep features to provide a better understanding of our learned DCC features. Compared with the known archives [1,29,36], a high-resolution remote sensing image archive with a much larger scale is used to train the CNNs. The CNNs are trained from scratch to acquire a powerful representation of the remote sensing images and explore the performance of the DCCs in the task of CB-HRRS-IR.
The main contributions of this paper are as follows:
  • We propose the extraction of DCCs for CB-HRRS-IR via two schemes. First, we extract a series of different dimensional DCCs that include a few tens to thousands of codes as the feature representation of remote sensing images. Second, PCA is introduced to compress the high-dimensional remote sensing image feature codes learned by traditional DCNNs. The lower-dimensional feature codes outperform the higher-dimensional ones, and the DCCs outperform the DPCA. In addition, we explore the FWM visualization method for deep features learned by DCNN frameworks, which can help us to better understand the differences between DCC features and the original deep features.
  • Compared with the fine-tuning methods of former studies, we train all the DCNNs from scratch to explore the performance of the DCCs in CB-HRRS-IR based on a large-scale remote sensing image archive. In addition, this large-scale high-resolution remote sensing image archive will be made publicly available to other researchers. We expect that the archive can serve as a standardized public dataset in this field and can help to advance the research in the remote sensing field.

3. Methods

In this section, we first review the off-the-shelf DCNN frameworks and then introduce the DCC feature extraction schemes evaluated in our work. Next, we present the evaluation protocols for the experimental results. Finally, we introduce our proposed visualization approach, which is designed to provide a better understanding of deep features.

3.1. DCNN Framework

A traditional DCNN framework usually consists of several different types of layers, including convolutional layers, pooling layers, and fully connected layers (Figure 1). In a convolutional layer, a certain number of convolutional kernels are used to generate feature maps from the previous layer. A pooling layer is applied to reduce the spatial dimensions of the feature map via an average or max pooling operation. One or several fully connected layers follow a convolutional layer or a pooling layer and constitute the final part of the DCNN framework. Note that the pixel values of a feature map and a fully connected layer are usually mapped via an activation function, such as Rectified Linear Units (ReLUs) [22], Leaky ReLU (LReLU) [37] and the improved Parametric ReLU (PReLU) [38]. These activation functions improve a CNN framework’s nonlinearity, effectively expedite the convergence of the training procedure and help avoid overfitting; thus, they greatly boost the framework’s generalization capacity.
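As a minimal illustration of these building blocks (not one of the architectures evaluated in this paper), the following PyTorch sketch chains convolutional, activation, pooling and fully connected layers; all layer sizes are arbitrary choices made for the example.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Toy DCNN: convolution -> ReLU activation -> pooling, repeated,
    followed by fully connected layers, as described above."""
    def __init__(self, num_classes=25):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # convolutional layer
            nn.ReLU(inplace=True),                       # activation function
            nn.MaxPool2d(kernel_size=2),                 # pooling layer
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 56 * 56, 256),                # fully connected layer
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),                 # final classification layer
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# A batch containing one 224 x 224 RGB image
logits = TinyCNN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 25])
```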

3.2. Feature Extraction

3.2.1. Feature Extraction Based on DCC

In general, a DCNN framework takes a raw image as input and processes it using a certain number of convolutional layers; then, it outputs feature maps of the original image in the final convolutional layer. The following fully connected layers then learn to convert the feature maps to a vector for image representation. In our opinion, the final fully connected layers can be regarded as an ordinary neural network for encoding the learned convolutional features. However, many studies have used high-dimensional codes (usually thousands of codes) in the fully connected layers for image representation for the tasks of object recognition, detection [22,23] and image retrieval [2,29,30]. Although these features are already rather compact compared with convolutional features, more compact features must be identified to further enhance the efficiency of the retrieval. Hinton [39] used neural networks for image data dimensionality reduction for the first time. Inspired by this work, we regard the fully connected layers in the DCNN framework to be ordinary neural networks that learn more compact codes (a few tens to thousands of codes) for image representation for the remote sensing image retrieval task.
In our experiments, two classical DCNN frameworks, i.e., Alexnet [22] and VGG-16 [40], are applied to compress the convolutional features. Alexnet includes five convolutional layers followed by three fully connected layers, while VGG-16 contains 13 convolutional layers followed by three fully connected layers. The fully connected layers usually learn the high-level abstract feature representation of an image. Additionally, image features are often extracted from the second fully connected layer with high dimensionality. In this view, the first fully connected layer (Fc1) encodes the learned feature map as a high-dimensional feature vector. The second fully connected layer (Fc2) can then be used to learn more compact feature codes with lower dimensionality from Fc1. Finally, the compact feature codes are fed into the classifier in the third fully connected layer (Fc3). Therefore, the Fc2 layer is important for learning DCCs with different dimensionalities and can be regarded as a DCC learning layer (Figure 1). To evaluate the performance of DCC features in the application of remote sensing image retrieval, we set the dimensions of the DCC learning layer to 4096, 1024, 256, 64, and 32. Then, we extract a series of deep compact feature codes with different dimensions to further explore the retrieval performance.
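For illustration, the sketch below shows one way to realize the DCC learning layer in PyTorch by replacing the Fc2 layer of the torchvision Alexnet implementation with a layer of configurable dimensionality; the helper names (build_dcc_alexnet, extract_dcc) and the framework choice are our own assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_dcc_alexnet(dcc_dim=64, num_classes=25):
    """Turn Fc2 of Alexnet into a DCC learning layer of the chosen
    dimensionality (e.g., 4096, 1024, 256, 64 or 32) and resize Fc3
    accordingly; weights stay randomly initialized for training from scratch."""
    net = models.alexnet(weights=None)                   # no pretrained weights
    net.classifier[4] = nn.Linear(4096, dcc_dim)         # Fc2: DCC learning layer
    net.classifier[6] = nn.Linear(dcc_dim, num_classes)  # Fc3: classifier
    return net

def extract_dcc(net, images):
    """Return the DCC codes (output of the Fc2/DCC layer) for a batch of images."""
    net.eval()
    with torch.no_grad():
        x = torch.flatten(net.avgpool(net.features(images)), 1)
        return net.classifier[:6](x)   # stop right after the DCC layer's ReLU

model = build_dcc_alexnet(dcc_dim=64)
codes = extract_dcc(model, torch.randn(2, 3, 224, 224))
print(codes.shape)  # torch.Size([2, 64])
```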

3.2.2. Feature Extraction Based on PCA

Although DCCs can be employed to replace the high-dimensional features extracted by the original DCNN frameworks, PCA, a classical data compression method with solid statistical foundations, is also widely used for dimensionality reduction. To further evaluate the retrieval performance of the proposed DCC schemes, we also adopt PCA to compress the high-dimensional deep feature codes and compare the retrieval performance with that of the DCCs. Specifically, we have a set of $n$ features $\{f_1, f_2, \ldots, f_n\}$, $f_i \in \mathbb{R}^D$, which are extracted by a raw DCNN framework and form the columns of the feature matrix $F \in \mathbb{R}^{D \times n}$. Our goal is to acquire a compressed feature matrix $F' \in \mathbb{R}^{C \times n}$, where $C$ denotes the length of the compressed feature codes. The basic principle of the PCA used to achieve this goal is to compress $F$ via a projection operation, $F' = U^T F$, where $U \in \mathbb{R}^{D \times C}$ is the projection matrix. $U$ can be obtained using the following objective function:
$$\max \; \mathrm{tr}(U^T F F^T U) \quad \mathrm{s.t.} \quad U^T U = I$$
The constraint $U^T U = I$ requires the projecting vectors to be orthogonal to one another in such a way that the compressed feature vectors are pairwise decorrelated. Similar to the DCC learning scheme, $f_i$ is a deep feature that is extracted from the penultimate fully connected layer of an original DCNN framework. We compress $f_i$ to yield shorter feature codes with the same dimensionality as that of the DCCs. In the following sections, we use DPCA to refer to the feature codes obtained by compressing the original deep features via the PCA method.
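A minimal NumPy sketch of this DPCA compression, assuming the deep features are stacked as rows of an array and using standard mean-centered PCA (the function name dpca_compress is illustrative):

```python
import numpy as np

def dpca_compress(features, c):
    """Compress n deep features of dimension D (rows of `features`, shape (n, D))
    to c dimensions with PCA. Returns the compressed features (n, c) and the
    projection matrix U (D, c) with orthonormal columns, i.e., U^T U = I."""
    centered = features - features.mean(axis=0)            # mean-centering
    cov = centered.T @ centered / (len(features) - 1)      # D x D covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    top = np.argsort(eigvals)[::-1][:c]                    # c largest eigenvalues
    U = eigvecs[:, top]
    return centered @ U, U

# Example: compress 4096-dimensional deep features of 1000 images to 64 codes
deep_feats = np.random.randn(1000, 4096).astype(np.float32)
compressed, U = dpca_compress(deep_feats, 64)
print(compressed.shape)  # (1000, 64)
```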

3.3. Evaluation Methods

3.3.1. Quantitative Evaluation

For the similarity measurements, three distances that are most commonly used for image retrieval are considered: the Manhattan distance, the Euclidean distance and the cosine distance. For the sake of efficiency, we adopt the Manhattan distance to identify the images that are similar to the query. Regarding the retrieval performance, the Precision (P), Recall (R) and mean Average Precision (mAP) are often employed to assess the retrieval results. Precision is defined as the fraction of retrieved images that are relevant to the query image, and recall is defined as the ratio of the number of retrieved relevant images to the total number of images that are relevant to the query image. Usually, only the top-k retrieved results are evaluated to determine their precision. The fraction of truly relevant images in the top-k results ($P@k$) is calculated as follows:
$$P(k) = \frac{\sum_{i=1}^{k} \sigma(i)}{k}$$
where $\sigma(i)$ indicates the relevance between a query $q$ and the $i$-th ranked retrieved image. Here, $\sigma(i) \in \{0, 1\}$ is 1 if the $i$-th item is a relevant image and 0 otherwise. To assess the performance of the ranked retrieval results, an interpolated recall-precision curve can be plotted to compare the differences and determine the comprehensive performance of the retrieval schemes. Given a set of $Q$ queries, the mAP can be defined by calculating the Average Precision (AP) for all queries:
$$mAP = \frac{\sum_{q=1}^{Q} AP(q)}{Q}$$
where the AP for each query $q$ is defined as follows:
$$AP = \frac{\sum_{k=1}^{R} P(k)\,\sigma(k)}{\sum_{j=1}^{R} \sigma(j)}$$
where $R$ represents the size of the test dataset.
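The following NumPy sketch illustrates how the Manhattan-distance ranking, $P@k$ and AP defined above can be computed; it is illustrative code, not the authors' implementation.

```python
import numpy as np

def retrieve(query, database, k=100):
    """Rank database features by Manhattan (L1) distance to the query feature
    and return the indices of the k most similar images."""
    dists = np.abs(database - query).sum(axis=1)
    return np.argsort(dists)[:k]

def precision_at_k(relevance, k):
    """P@k: fraction of relevant images among the top-k ranked results;
    `relevance` is the 0/1 vector sigma(i) aligned with the ranking."""
    return relevance[:k].sum() / k

def average_precision(relevance):
    """AP for one query: sum_k P(k) * sigma(k), divided by the number of
    relevant images, following the equation above."""
    rel = np.asarray(relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    p_at_k = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return (p_at_k * rel).sum() / rel.sum()

# Toy example: the 1st, 2nd and 4th ranked images are relevant
rel = np.array([1, 1, 0, 1, 0])
print(precision_at_k(rel, 5))   # 0.6
print(average_precision(rel))   # (1/1 + 2/2 + 3/4) / 3 = 0.9166...
```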

3.3.2. Visual Evaluation

The core idea of our visualization method is to extract image features from the penultimate fully connected layer and then to employ the backpropagation algorithm to map the feature codes back onto the convolutional feature layer, which yields the FWM. Specifically, for an image, the output of the fully connected layer is treated as a feature vector, and each code of the vector can be regarded as the importance of the corresponding dimension in the feature space. Therefore, we backpropagate the extracted feature as weights of the convolutional layers, and a weighted sum of the convolutional feature maps is used to generate the final FWM. Let $A_k(x, y)$ represent the activation of the $k$-th feature map of the last convolutional layer at position $(x, y)$. To obtain the importance of the feature code at the $d$-th dimension, $f_d$, which is learned from the activation $A_k(x, y)$ of a feature map, we first calculate the gradient of $f_d$ with respect to $A_k(x, y)$:
$$g_k^d(x, y) = \frac{\partial f_d}{\partial A_k(x, y)}$$
Therefore, the whole contribution of $A_k(x, y)$ to the final feature can be calculated as follows:
$$G_k(x, y) = \sum_{d=1}^{D} g_k^d(x, y)$$
where $D$ is the dimensionality of the output feature. Thus, for every activation at position $(x, y)$ of a feature map, we can obtain its weight with respect to the final extracted feature using $G_k(x, y)$. However, the information contained in the final feature is not pure because of the image quality; thus, noise information is contained in the feature codes. As a result, this noise is also projected back to the weight of an activation $A_k(x, y)$ via the above operations. Considering this situation, we adopt the average weight of $A_k(x, y)$ as the final weight of the $k$-th feature map:
$$w_k = \frac{1}{n} \sum_{(x, y)} G_k(x, y)$$
where n is the number of pixels of the feature map. Usually, several feature maps correspond to the channels of the last convolutional layer. To generate the FWM for visualization, the weighted sum of these feature maps is calculated as follows:
$$FWM = \sum_{k} w_k A_k(x, y)$$
Based on this method, we can visualize the information that is contained in the feature maps and preserved in the final feature codes. This approach helps us further understand the learned DCC feature codes and compare them to the traditional codes. Thus, this visualization method is feature oriented and can be generalized to any layer of a CNN framework to provide a representation of the nature of the CNN framework.
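The sketch below shows how the FWM could be computed with automatic differentiation, assuming an Alexnet-style network with `features`, `avgpool` and `classifier` blocks (as in the earlier DCC sketch); the function name and the use of PyTorch autograd are illustrative assumptions rather than the authors' implementation.

```python
import torch
from torchvision import models

def feature_weighted_map(net, image):
    """Compute the FWM of a single image (shape 1 x 3 x H x W) following the
    equations above: backpropagate the final feature codes f_d to the last
    convolutional feature maps A_k(x, y), average the summed gradients per map
    to obtain the weights w_k, and return the weighted sum of the maps."""
    net.eval()
    conv_maps = net.features(image)              # A_k(x, y), shape (1, K, h, w)
    conv_maps.retain_grad()                      # keep gradients on this non-leaf tensor
    pooled = torch.flatten(net.avgpool(conv_maps), 1)
    feats = net.classifier[:6](pooled)           # final feature codes f_d (e.g., the DCCs)
    feats.sum().backward()                       # gradients give G_k(x, y) = sum_d df_d/dA_k
    weights = conv_maps.grad.mean(dim=(2, 3), keepdim=True)   # w_k: spatial average of G_k
    fwm = (weights * conv_maps).sum(dim=1).squeeze(0)         # weighted sum of feature maps
    return fwm.detach()

# Shape check with an untrained Alexnet; for display the FWM is usually
# upsampled to the input image size and overlaid as a heat map.
fwm = feature_weighted_map(models.alexnet(weights=None), torch.randn(1, 3, 224, 224))
print(fwm.shape)  # torch.Size([6, 6])
```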

4. Experimental Section

4.1. Dataset

Extensive evaluations of retrieval performance were conducted using a large-scale high-resolution remote sensing image dataset composed of 25 classes of different scenes/objects. For each class, 500 images with a size of 256 by 256 pixels were manually collected, primarily from the World Map (World Map web site: http://map.tianditu.com/map/index.html.) website. The image classes are as follows: agricultural, airport, basketball court, bridge, building, container, fishpond, footbridge, forest, greenhouse, intersection, oiltank, overpass, parking lot, plane, playground, residential, river, ship, solar power area, square, tennis court, water, wharf, and workshop. Specifically, the image resolution of each class is 0.6 m per pixel, except for the airport class. Most of the images were collected from the 18-level remote sensing imagery found on the World Map. However, an image that is 256 by 256 pixels might be too small to contain a large remote sensing scene, such as an airport or an overpass. As a compromise, we collected airport images with a 2.4 m per pixel resolution from the 16-level remote sensing imagery on World Map; these images can clearly reflect the properties of an airport. For the overpass, we collected corresponding images with a resolution of 1.2 m that contain the main parts of an overpass. Additionally, a small number of plane images were collected from Google Maps as a supplement; the resolution was the same as that for the images from World Map. All the collected images are in the Red-Green-Blue (RGB) color space. Four samples of each class are shown in Figure 2. As explained above, our dataset contains large-scale remote sensing images that vary in resolution, image size, and source. Each of these factors poses challenges to the comprehensive retrieval performance of our experiments. This remote sensing image dataset will be made publicly available to other researchers, and we expect that it will greatly promote research in the remote sensing community, including research related to remote sensing image classification, scene classification, object detection/recognition, and image retrieval.

4.2. Experimental Setup

As introduced in the previous section, we evaluated the retrieval performance using the popular Alexnet [22] and VGG-16 [40] CNNs, and we set the penultimate fully connected layer to be of various dimensionalities (4096, 1024, 256, 64 and 32) to obtain the DCCs. For the dataset, we randomly selected 250 images for each class from the whole dataset as a training dataset and then used the remainder as the test dataset. For the training dataset, we randomly selected 200 images of each class as training samples, with the remaining 50 images of each class being used for the validation dataset. Note that we selected less than half of the images to train the CNNs, which is different from [30], who adopted most of the images in their dataset for CNN training. Using such a small number of samples for training can result in overfitting because of the large number of parameters in the CNNs. To overcome this problem, we employed simple data augmentations to enrich the training dataset. Specifically, the data augmentation consisted of generating horizontal reflections of the images and rotations of 90, 180 and 270 degrees. Compared with previous studies [2,29,30] that trained the CNN models by fine-tuning pretrained CNN models based on natural images, we trained the CNNs from scratch using high-resolution remote sensing images to avoid the latent differences between natural images and remote sensing images.
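As an illustration, one possible torchvision implementation of this augmentation is sketched below; it assumes reflections and rotations are applied to the original image independently, which is one reading of the description above, and the authors' exact pipeline may differ.

```python
from torchvision.transforms import functional as TF

def augment(image):
    """Return the original image, its horizontal reflection, and its
    rotations by 90, 180 and 270 degrees (PIL image or CHW tensor)."""
    return [
        image,
        TF.hflip(image),        # horizontal reflection
        TF.rotate(image, 90),   # rotations
        TF.rotate(image, 180),
        TF.rotate(image, 270),
    ]
```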
For the required input dimensionality, all images were resized to 227 by 227 pixels for Alexnet and 224 by 224 pixels for VGG-16, as well as for their corresponding transformation networks. In addition, the mean values computed from the training samples were subtracted from the images. Following the previous works [22,23], the weights were initialized using a Gaussian distribution for Alexnet and the “Xavier” method for VGG-16. We set the learning rate to 0.01 for Alexnet and 0.001 for VGG-16. The batch size was set to 256 for Alexnet and 48 for VGG-16. For a fair comparison, the transformation CNNs were initialized in a manner similar to that of their corresponding original CNNs. All experiments were conducted using the Ubuntu-16.04 system and two Nvidia GTX Titan X GPUs with 12 GB of RAM.

4.3. Results and Analyses

In this section, we compare the performances of the DCC and DPCA methods for different dimensions. The top-100 retrieval precision results of each class for different methods are evaluated. The top-150 retrieval precision results for each class are also shown for further evaluation. For convenience, we use “ADCC#” to represent the names of the DCC frameworks: it denotes DCCs with a dimensionality of # learned using the “Alexnet” framework. The same naming convention is also applied to the corresponding VGG frameworks. Note that the length of the features extracted from the original Alexnet and VGG frameworks is 4096. Similarly, we use “APCA#” to represent the names of the DPCA learning methods: it denotes image feature codes learned by the original “Alexnet” framework and then compressed to a dimension of # via the PCA method. A similar convention is also used for the corresponding VGG frameworks.

4.3.1. Retrieval Performance of DCCs

In this section, the image retrieval performance of the DCC method is evaluated. The top-100 retrieval precision results for each class are shown in Table 1. In Table 1, there are two results reported in bold in each row. The first result is the best retrieval result for Alexnet and the corresponding DCC models; the second one is the best retrieval result for VGG and the corresponding models. As shown in Table 1, most of the best results for Alexnet and its DCC models are obtained using ADCC64 and ADCC32, and all the best results are obtained using our DCC models. Specifically, the findings indicate that precision is greatly increased by our DCC models for all classes compared with the results of the original Alexnet model. The river class in the ADCC64 model shows the largest increase in precision, with a significant improvement of nearly 20%. Moreover, nearly all our DCC models achieve obvious improvements in the retrieval results for each class compared with the results of the original Alexnet model. For VGG and its corresponding DCC models, most of the best results are achieved using VDCC64, and there is also a large improvement in the retrieval precision for each class compared with the original VGG model. Several of the best retrieval results are not obtained using VDCC64, but the precision achieved using VDCC64 is comparable with the best results. Additionally, the best improvement is observed for the wharf class, for which the precision increases from 51.02% to 75.32%. Thus, the precision of the VDCC64 model is more than 24% higher than that of the original VGG model. Compared with the ADCC32 model, which achieved several of the best retrieval results, the VDCC32 model does not generate the best retrieval results for any class, and certain classes appear to show decreases in retrieval precision. Nevertheless, the VDCC32 model achieves prominent improvements in the retrieval precision for 60% of the classes, and certain improvements in retrieval precision are noticeable compared with the results of the original VGG model, such as for the wharf and building classes, which showed precision improvements of 18.66% and 14.95%, respectively. Table 2 shows the top-150 retrieval precision results for all classes. The retrieval performance based on the P@150 values shows a similar trend to that based on the P@100 values. The greatest improvements in precision are observed for the river class using the ADCC64 model and the wharf class using the VDCC64 model. Taken together, all our DCC models can generate significant improvements in the retrieval precision compared with their corresponding original frameworks.
For Alexnet and its DCC models, several of the best results are obtained using ADCC32, and the corresponding results obtained using ADCC64 are similar to those of ADCC32. For the VDCC32 model, a sharp decline in the precision is observed compared with that of the VDCC64 model. These results indicate that image features with a dimensionality of 64 may be optimal for the image representation. To comprehensively assess the performances of different models, we compare the model accuracies of these methods, which are calculated as the mean of the per-class retrieval precisions achieved by each CNN model. The model accuracies based on the P@100 and P@150 values are listed in Table 3 and Table 4, respectively. Additionally, the best results are reported in bold. Table 3 and Table 4 show that the model accuracy clearly increases as the feature dimensionality is reduced from 4096 to 64. ADCC64 and VDCC64 achieve the best results at both the P@100 and P@150 levels, and remarkable accuracy improvements can be observed. Specifically, at the P@100 level, ADCC64 and VDCC64 show accuracy improvements of 8.81% and 8.15% compared with the results of the corresponding original CNN frameworks, respectively. At the P@150 level, ADCC64 and VDCC64 show accuracy improvements of 9.37% and 9.06% compared with the results of the baseline CNN frameworks, respectively. These findings are in accordance with the class precision results shown in Table 1 and Table 2. In Table 3 and Table 4, each row shows approximate accuracy improvements as the dimensions of the DCCs decrease from 4096 to 1024, 256 and 64. Thus, the efficacy of our proposed DCC method is demonstrated by the performances of different types of CNN frameworks. Regarding the 32-dimensional feature codes, both ADCC32 and VDCC32 show decreases in model accuracy compared with the corresponding 64-dimensional feature codes, especially VDCC32. This finding indicates that 64-dimensional feature codes are more effective for image representation than feature codes with other dimensions, which confirms our former hypothesis. Moreover, even when decreases in accuracy are observed for the 32-dimensional features compared with the 64-dimensional features, the ADCC32 and VDCC32 models still achieve significantly higher accuracies than those of the Alexnet and VGG models, respectively.
Specifically, a comparison of feature codes of the same dimensionality from different frameworks shows that VGG and its corresponding DCC models achieve higher accuracies than those of Alexnet and its corresponding DCC models. This finding makes sense because VGG and its corresponding DCC models are deep CNN frameworks, while Alexnet and its corresponding DCC models are shallow CNN frameworks. Nevertheless, a comparison of the results of the DCC models of Alexnet with those of the original VGG model shows that the accuracies of the DCC models are considerably higher than those of the VGG model, especially for the ADCC64 and ADCC32 models, which present accuracy improvements of 6.89% and 6.30%, respectively. These findings demonstrate that the feature codes with low dimensionality learned by our DCC models have more powerful image representation abilities compared with the high-dimensional feature codes. This finding reveals that high-dimensional features learned by traditional CNN frameworks do not always have the best image representation abilities, whereas the lower-dimensional features that can be learned by our DCC method can greatly improve upon the performance of the original frameworks. More importantly, these results indicate that our proposed DCC method can achieve a better performance using a shallow CNN framework rather than a deep CNN framework. The superiority of this approach is obvious. Our proposed DCC method is easy to use, and it can also reduce the training time requirements and improve the convenience of the applications. This discovery can also be employed for many other tasks, such as image classification and object detection, to further advance their performance and efficiency because of the shorter and more powerful feature representation capacity.
For further evaluation, we employ a confusion matrix to show the classification results for 64- and 4096-dimensional features extracted from different models, as shown in Table 5, Table 6, Table 7 and Table 8. For simplicity, let C1, C2, …, C25 represent agricultural, airport, basketball court, bridge, building, container, fishpond, footbridge, forest, greenhouse, intersection, oiltank, overpass, parking lot, plane, playground, residential, river, ship, solar power area, square, tennis court, water, wharf and workshop, respectively. Each column in Table 5, Table 6, Table 7 and Table 8 corresponds to the prediction result, while each row represents the actual class. Note that the last column and row correspond to the classification and prediction precision, respectively. In addition, the overall accuracy is reported in bold in the right-bottom cell. As seen from Table 5 and Table 6, even though there are slight decreases in classification precision for several classes, most of the classes experienced improvements in classification precision when the ADCC64 model was used. In addition, the precision improvements are more obvious compared to the decreases in classification precision, such as for river (C18), square (C21) and tennis court (C22), which achieved 9.60%, 6.80%, and 7.60% higher classification precision, respectively, when our DCC method was used. In addition, the prediction precision of each class shows a similar trend. The intersection class achieved a 10.70% improvement in prediction precision when the ADCC64 model was used. For the VGG and VDCC64 models (Table 7 and Table 8), most classes experienced precision improvements in both classification and prediction when our DCC method was used. However, the playground (C16) class suffered a 5.60% decrease in classification precision when the VDCC64 model was used. This result occurred because some playground samples are more easily classified as basketball court (C3) samples, as shown in Table 8. In addition, some tennis court (C22) samples are also wrongly classified as basketball court samples. This led to a 3.78% decrease in the prediction precision of the basketball court class. The main reason for this phenomenon is that the backgrounds of the tennis court, playground and basketball court samples are similar to some extent. The VDCC64 model is more capable of discriminating the basketball court class, as the classification precision for the basketball court class using this model experienced a 10.80% improvement. This result implies that the VDCC64 model can learn to discriminate the basketball court class from other classes. It also reveals that VDCC64 has some deficiencies in discriminating similar classes. Nevertheless, our DCC method can achieve a higher overall classification accuracy. For the ADCC64 model, the overall classification accuracy is 86.56%, which is nearly 2.00% higher than that of the original Alexnet model (84.58%). In addition, the VDCC64 model achieves an approximately 1.50% improvement in overall classification accuracy (88.37%) compared with the original VGG model (86.88%). In general, our DCC models can achieve better classification results for most classes.

4.3.2. Retrieval Performance of the DPCA

In this section, we show the retrieval performance achieved using the DPCA scheme described in Section 3. Table 9 and Table 10 show the top-100 and top-150 retrieval precision results for each class, respectively. The results show that when we compress the original deep feature codes to dimensions of 1024, 256 and 64, only a few classes show improved retrieval precision. The 32-dimensional features show better retrieval performance for certain classes. The wharf class shows the greatest increase in precision when the original deep features are compressed via the PCA method. Specifically, at the P@100 level, an 8.45% precision improvement is achieved by compressing the deep features from the Alexnet model and a 17.57% precision improvement is achieved by compressing the deep features from the VGG model, whereas at the P@150 level, 8.75% and 18.33% precision improvements are achieved by compressing the deep features from the Alexnet and VGG models, respectively. However, Table 9 and Table 10 show that the improvements in retrieval performance are limited compared with the results based on the original deep features. Many of the best retrieval results are obtained using the original deep features. In other words, the retrieval performance decreases for certain classes when extracting the feature codes via the PCA method. This finding indicates that certain important feature information can be lost while compressing the deep features to lower dimensionalities. A comparison of the information in Table 1 and Table 2 shows that our DCC method can learn shorter feature codes and achieve much better retrieval performance.
Table 11 and Table 12 show the comprehensive retrieval accuracies for the different dimensional features for the entire dataset. As shown in Table 11 and Table 12, sharp decreases in the comprehensive retrieval performance are observed when compressing the original deep features to different dimensional features. Note that the reduction in precision declines as the dimensionality is reduced. When deep features are compressed to a dimensionality of 32, retrieval precision improvements occur; this finding corresponds to the information presented in Table 9 and Table 10. Based on this observation, we also calculate the retrieval accuracies achieved when the deep features are compressed to a dimensionality of 16. Specifically, at the P@100 level, 16-dimensional features compressed from the deep features of the Alexnet framework achieve a retrieval precision of 69.05%, which is 0.30% lower than the precision of the original deep features. With regard to the deep features of the VGG framework, the compressed 16-dimensional features achieve a retrieval precision of 73.01%, which is 1.73% higher than the precision of the original deep features. The P@150 level generally shows the same results. Nevertheless, all these results show reduced performance compared with that of the compressed 32-dimensional features, as shown in Table 11 and Table 12. A comparison of our DCC method, which can achieve significant improvements in the retrieval performance for all the different features at lower dimensionalities, with the DPCA method indicates that the DPCA method only achieves limited improvements in retrieval performance for the 32- and 16-dimensional features. This phenomenon mainly occurs because the PCA method is a linear compression scheme, while the features of remote sensing images should have non-linear relations. This finding indicates that our proposed DCC method is an efficient method for learning more powerful image feature representations with lower dimensionalities. Figure 3 shows a query image of a wharf and the corresponding top-5 irrelevant retrieval images obtained using the Alexnet-based DCC and DPCA methods. The results show that the irrelevant images returned by the ADCC32, ADCC64 and ADCC256 models appear at much later positions in the ranked lists than those returned by the other models, which indicates that our proposed DCC method has a better retrieval performance. Specifically, ADCC64 shows the best results, which is consistent with the results shown in Table 1, Table 2, Table 3 and Table 4 and prior analyses.

4.3.3. Comparison

In this section, we compare the performances of the DCC and DPCA methods. The mAP results for the evaluated methods are listed in Table 13, which shows that all DCC frameworks substantially outperform the baseline CNN frameworks. Compared with the baseline frameworks, the 64-dimensional features extracted by our proposed DCC method based on the Alexnet or VGG frameworks achieve the best results. Specifically, absolute mAP increases of 8.51% and 8.64% are observed for the 64-dimensional features using the Alexnet- and VGG-based DCC frameworks, respectively. For the DPCA method, only 32-dimensional features compressed from the deep features of the VGG framework achieve better mAP performance compared with the baseline framework. However, only the VPCA32 features can improve the performance of the image retrieval. In contrast, the 32-dimensional features extracted by our proposed DCC method can highly outperform those of the DPCA method. Furthermore, the mAP values of all DCC frameworks are greater than those of the DPCA method, as shown in Table 13.
The recall-precision curves of the different DCC and DPCA methods are plotted in Figure 4. As shown, low-dimensional features obtained using our proposed DCC method outperform those of the baseline frameworks (Figure 4a,b), which is also consistent with Table 3 and Table 13. Specifically, for Alexnet-based frameworks, the 64-dimensional features have the best performance, with high recall and precision retrieval results (Figure 4a). Note that the 32-dimensional DCC features based on Alexnet achieve much better results at lower recall levels than the corresponding baseline features, which is very desirable for precision-oriented image retrieval. For high recall levels, the 256-dimensional features have the best performance. Therefore, this dimensionality is suitable for recall-oriented image retrieval. For VGG-based frameworks, the 64-dimensional features also outperform all the other dimensional features and show comparable results at high recall levels, even when compared with the 256-dimensional features (Figure 4b). For the DPCA method, only the 32-dimensional features show comparable results when compared with the original deep features (Figure 4c,d). In general, the 64-dimensional features based on our proposed DCC frameworks show the best results, and they dramatically improve the retrieval performance.
A further comparison based on the above analyses is performed by plotting the best results of each framework in Figure 5. The results reveal that the 64-dimensional features extracted by our DCC frameworks significantly outperform those of the baseline frameworks and the DPCA method, which demonstrates the effectiveness and practicability of our proposed DCC method for remote sensing image retrieval. Specifically, the findings indicate that the 64-dimensional DCC features based on the Alexnet and the VGG frameworks generally have the same performance. Note that Alexnet-based DCC models are shallow frameworks, whereas VGG-based models are much deeper. This comparison shows that our proposed DCC method can use a shallow CNN framework to realize a performance comparable to that achieved using a much deeper CNN framework. Furthermore, the 64-dimensional DCC features based on the Alexnet framework achieve much better retrieval results than the original VGG framework, which is also consistent with the results shown in Table 3 and Table 13. This finding reveals that our proposed DCC method can greatly improve upon the performance of a shallow CNN framework and can be used to obtain a greater precision than that achieved using a deeper CNN framework, which shows the advantages of improving storage and efficiency simultaneously.

5. Visual Understanding of the DCC

The FWM can help us better understand the nature of DCC features via visualization. Based on the previous results and analyses, the ADCC64 and VDCC64 models show the best retrieval performances, which demonstrates that the 64-dimensional DCC features can represent an image more appropriately than the 4096-dimensional features extracted from traditional DCNN frameworks. Therefore, the ADCC64 and VDCC64 models are selected for the weighted feature visualization. We also show the FWM based on the traditional Alexnet and VGG DCNN frameworks for comparison. Specifically, 10 samples from the 25 classes of the dataset are randomly selected for visualization. The results of the FWMs are shown in Figure 6, which indicates that our proposed FWM method can successfully illustrate the regions of objects in an image. The red regions largely coincide with the objects in the images, which demonstrates the effectiveness of our proposed FWM visualization method. Simultaneously, the red regions in the FWMs indicate the features that are focused on by the DCNN frameworks; this information is preserved in the final deep features. This observation helps explain why the learned deep features provide accurate representations of the images.
A comparison of the FWMs based on the ADCC64 and VDCC64 models with those based on the baseline CNN frameworks shows that the former frameworks could indicate the localization and extent of objects more precisely in the images for certain classes, such as playground, parking lot and tennis court. However, for the FWMs based on the original Alexnet and VGG frameworks, a greater amount of information is scattered in a messy manner, which could represent the noisy information of objects. This information can also be learned and preserved in the final feature codes; thus, it may have a negative influence on the retrieval performance. This circumstance helps us intuitively understand how our proposed DCC frameworks can significantly outperform the traditional Alexnet and VGG frameworks for these classes. This approach also corresponds to the retrieval precision results shown in Table 1. Although noisy information is shown in the FWM of the footbridge class, our proposed DCC methods tend to focus more precisely on the position and region of the footbridge. For the building class, the DCC frameworks tend to learn more aggregated information from the building top, which is much different from the scattered information found by the original frameworks, especially for the VGG model. Similarly, the VDCC64 model remarkably improves the retrieval performance of the building class by 17.10% compared with the original VGG model. For the plane class, the DCC method can learn more information from different parts of a plane. Regarding the FWM of the oil tank, the DCC and traditional frameworks can distinguish all the oil tanks in the image. Note that the VDCC64 model has better results compared with the VGG model. For the airport and bridge classes, all CNN frameworks tend to learn the background information of the objects rather than the airport runways or bridge bodies. This tendency means that the CNN frameworks attempt to discern the objects from their backgrounds. The DCNN frameworks do not always learn object information, which is similar to human cognition. Additionally, the background information is vital to class discrimination information learning. Based on these observations, the proposed FWM visualization method based on our DCC frameworks should also be applicable to class discrimination, scene recognition and other non-classification tasks.

6. Conclusions

In this work, we propose learning DCCs for CB-HRRS-IR. Extensive experiments are conducted to learn feature codes with dimensionalities extending from tens to thousands for image retrieval based on a large-scale remote sensing image archive. A PCA is also employed to compress the deep features. The experimental results reveal that our proposed DCC method can remarkably outperform traditional DCNN frameworks and the DPCA method. Additionally, the 64-dimensional DCC features yield the best retrieval results. To further understand the learned deep features, we explore a feature-oriented visualization method, the FWM, which demonstrates that our proposed DCC method can learn more powerful information for image representation. This feature-oriented visualization method can also be generalized to any CNN framework and generate FWMs from any layer for a classification-oriented or non-classification task.

Acknowledgments

This research is supported by the National Key Research and Development Program of China (No. 2016YFB0502602), Internally Funded Projects by LIESMARS and Fundamental Research Funds for the Central Universities.

Author Contributions

Z.X., Y.L. and D.L. conceived and designed the experiments; Y.L. performed the experiments, analyzed the data and wrote the paper; C.W., G.T. and J.L. contributed materials.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, Y.; Newsam, S. Geographic image retrieval using local invariant features. IEEE Trans. Geosci. Remote Sens. 2013, 51, 818–832. [Google Scholar] [CrossRef]
  2. Li, Y.; Zhang, Y.; Tao, C.; Zhu, H. Content-Based High-Resolution Remote Sensing Image Retrieval via Unsupervised Feature Learning and Collaborative Affinity Metric Fusion. Remote Sens. 2016, 8, 709. [Google Scholar] [CrossRef]
  3. Demir, B.; Bruzzone, L. Hashing-Based Scalable Remote Sensing Image Search and Retrieval in Large Archives. IEEE Trans. Geosci. Remote Sens. 2016, 54, 892–904. [Google Scholar] [CrossRef]
  4. Bretschneider, T.; Cavet, R.; Kao, O. Retrieval of remotely sensed imagery using spectral information content. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toronto, ON, Canada, 24–28 June 2002; Volume 4, pp. 2253–2255. [Google Scholar]
  5. Geng, W.; Zhang, J.; Zhuo, L.; Liu, J.; Chen, L. Creating Spectral Words for Large-Scale Hyperspectral Remote Sensing Image Retrieval. Pacific Rim Conference on Multimedia; Springer: New York, NY, USA, 2016; pp. 116–125. [Google Scholar]
  6. Hongyu, Y.; Bicheng, L.; Wen, C. Remote sensing imagery retrieval based-on Gabor texture feature classification. In Proceedings of the 7th IEEE International Conference on Signal Processing, Beijing, China, 31 August–4 September 2004; Volume 1, pp. 733–736. [Google Scholar]
  7. Aptoula, E. Remote sensing image retrieval with global morphological texture descriptors. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3023–3034. [Google Scholar] [CrossRef]
  8. Yang, F.P.; Hao, M.L. Effective Image Retrieval Using Texture Elements and Color Fuzzy Correlogram. Information 2017, 8, 27. [Google Scholar] [CrossRef]
  9. Ma, A.; Sethi, I.K. Local shape association based retrieval of infrared satellite images. In Proceedings of the Seventh IEEE International Symposium on Multimedia (ISM’05), Irvine, CA, USA, 12–14 December 2005; pp. 551–557. [Google Scholar]
  10. Scott, G.J.; Klaric, M.N.; Davis, C.H.; Shyu, C.R. Entropy-balanced bitmap tree for shape-based object retrieval from large-scale satellite imagery databases. IEEE Trans. Geosci. Remote Sens. 2011, 49, 1603–1616. [Google Scholar] [CrossRef]
  11. Yang, J.; Liu, J.; Dai, Q. An improved Bag-of-Words framework for remote sensing image retrieval in large-scale image databases. Int. J. Digit. Earth 2015, 8, 273–292. [Google Scholar] [CrossRef]
  12. Wang, Y.; Zhang, L.; Tong, X.; Zhang, L.; Zhang, Z.; Liu, H.; Xing, X.; Mathiopoulos, P.T. A three-layered graph-based learning approach for remote sensing image retrieval. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6020–6034. [Google Scholar] [CrossRef]
  13. Bosilj, P.; Aptoula, E.; Lefèvre, S.; Kijak, E. Retrieval of Remote Sensing Images with Pattern Spectra Descriptors. ISPRS Int. J. Geo-Inf. 2016, 5, 228. [Google Scholar] [CrossRef]
  14. Pham, M.T.; Mercier, G.; Regniers, O.; Michel, J. Texture Retrieval from VHR Optical Remote Sensed Images Using the Local Extrema Descriptor with Application to Vineyard Parcel Detection. Remote Sens. 2016, 8, 368. [Google Scholar] [CrossRef]
  15. Du, Z.; Li, X.; Lu, X. Local Structure Learning in High Resolution Remote Sensing Image Retrieval. Neurocomputing 2016, 207, 813–822. [Google Scholar] [CrossRef]
  16. Zeng, Z. A Novel Local Structure Descriptor for Color Image Retrieval. Information 2016, 7, 9. [Google Scholar] [CrossRef]
  17. Liu, T.; Zhang, L.; Li, P.; Lin, H. Remotely sensed image retrieval based on region-level semantic mining. EURASIP J. Image Video Process. 2012, 2012, 4. [Google Scholar] [CrossRef]
  18. Wang, M.; Song, T. Remote Sensing Image Retrieval by Scene Semantic Matching. IEEE Trans. Geosci. Remote Sens. 2013, 51, 2874–2886. [Google Scholar] [CrossRef]
  19. John, L.M.; Bhandari, K.A. A Novel Method for Satellite Image Retrieval using Semantic Mining and Hashing. Int. J. Comput. Appl. 2016, 147, 33–36. [Google Scholar]
  20. Hinton, G.; Deng, L.; Yu, D.; Dahl, G.E.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.N. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups. IEEE Signal Process. Mag. 2012, 29, 82–97. [Google Scholar] [CrossRef]
  21. Yu, D.; Seltzer, M.L.; Li, J.; Huang, J.T.; Seide, F. Feature Learning in Deep Neural Networks—Studies on Speech Recognition Tasks. arXiv, 2013; arXiv:1301.3605. [Google Scholar]
  22. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the International Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105. [Google Scholar]
  23. Zeiler, M.D.; Fergus, R. Visualizing and Understanding Convolutional Networks. arXiv, 2013; arXiv:1311.2901v3. [Google Scholar]
  24. Cheng, G.; Zhou, P.; Han, J. Learning rotation-invariant convolutional neural networks for object detection in VHR optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7405–7415. [Google Scholar] [CrossRef]
  25. Long, Y.; Gong, Y.; Xiao, Z.; Liu, Q. Accurate Object Localization in Remote Sensing Images Based on Convolutional Neural Networks. IEEE Trans. Geosci. Remote Sens. 2017, 55, 2486–2498. [Google Scholar] [CrossRef]
  26. Huang, E.H.; Socher, R.; Manning, C.D.; Ng, A.Y. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers, Jeju Island, Korea, 8–14 July 2012; Volume 1, pp. 873–882. [Google Scholar]
  27. Mikolov, T.; Yih, W.T.; Zweig, G. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Atlanta, GA, USA, 9–15 June 2013; Volume 13, pp. 746–751. [Google Scholar]
  28. Zhou, W.; Shao, Z.; Diao, C.; Cheng, Q. High-resolution remote-sensing imagery retrieval using sparse features by auto-encoder. Remote Sens. Lett. 2015, 6, 775–783. [Google Scholar] [CrossRef]
  29. Napoletano, P. Visual descriptors for content-based retrieval of remote sensing images. arXiv, 2016; arXiv:1602.00970. [Google Scholar]
  30. Zhou, W.; Newsam, S.; Li, C.; Shao, Z. Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval. Remote Sens. 2017, 9, 489. [Google Scholar] [CrossRef]
  31. Springenberg, J.T.; Dosovitskiy, A.; Brox, T.; Riedmiller, M. Striving for simplicity: The all convolutional net. arXiv, 2014; arXiv:1412.6806. [Google Scholar]
  32. Mahendran, A.; Vedaldi, A. Understanding deep image representations by inverting them. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5188–5196. [Google Scholar]
  33. Dosovitskiy, A.; Brox, T. Inverting visual representations with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 27–30 June 2016; pp. 4829–4837. [Google Scholar]
  34. Zhou, B.; Khosla, A.; Lapedriza, A.; Oliva, A.; Torralba, A. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
  35. Selvaraju, R.R.; Das, A.; Vedantam, R.; Cogswell, M.; Parikh, D.; Batra, D. Grad-cam: Why did you say that? visual explanations from deep networks via gradient-based localization. arXiv, 2016; arXiv:1610.02391. [Google Scholar]
  36. Xia, G.S.; Yang, W.; Delon, J.; Gousseau, Y.; Sun, H.; Maître, H. Structural High-resolution Satellite Image Indexing. In Proceedings of the ISPRS TC VII Symposium—100 Years ISPRS, Vienna, Austria, 5–7 July 2010; pp. 298–303. [Google Scholar]
  37. Maas, A.L.; Hannun, A.Y.; Ng, A.Y. Rectifier nonlinearities improve neural network acoustic models. In Proceedings of the ICML 2013: The 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; Volume 30. [Google Scholar]
  38. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Boston, MA, USA, 7–12 June 2015; pp. 1026–1034. [Google Scholar]
  39. Hinton, G.E.; Salakhutdinov, R.R. Reducing the Dimensionality of Data with Neural Networks. Science 2006, 313, 504. [Google Scholar] [CrossRef] [PubMed]
  40. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv, 2014; arXiv:1409.1556. [Google Scholar]
Figure 1. Framework of the DCC and FWM. An input image is mapped via the CNN to learn DCCs as image feature representations. The different colors of the DCC features indicate the different importance of each element in the feature space.
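As a rough illustration of the mapping sketched in Figure 1, the following PyTorch-style snippet truncates a convolutional backbone and attaches a small fully connected layer so that each image is summarized by only 64 codes. The class name, layer sizes, and the tanh activation are illustrative assumptions and not the paper's exact DCC architecture.

```python
import torch
import torch.nn as nn
from torchvision import models


class LowDimCodeNet(nn.Module):
    """Illustrative sketch (not the paper's exact DCC layout): a convolutional
    backbone followed by a small fully connected layer that produces a
    64-dimensional code per image, supervised by a classification head."""

    def __init__(self, code_dim=64, num_classes=25):
        super().__init__()
        backbone = models.alexnet()                    # no pretrained weights; trained from scratch
        self.features = backbone.features              # convolutional layers only
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.code = nn.Linear(256 * 6 * 6, code_dim)   # low-dimensional deep code
        self.classifier = nn.Linear(code_dim, num_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        code = torch.tanh(self.code(x))                # feature vector used for retrieval
        logits = self.classifier(code)                 # supervision signal during training
        return code, logits


# toy forward pass on a single 224 x 224 RGB image
net = LowDimCodeNet()
codes, _ = net(torch.randn(1, 3, 224, 224))
print(codes.shape)  # torch.Size([1, 64])
```

In such a setup, the classification head would drive training, while only the 64-D code would be stored and compared at retrieval time.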
Figure 2. Ground truth dataset containing 500 images of each class. Four samples of the following classes are shown: (a) agricultural; (b) airport; (c) basketball court; (d) bridge; (e) building; (f) container; (g) fishpond; (h) footbridge; (i) forest; (j) greenhouse; (k) intersection; (l) oiltank; (m) overpass; (n) parking lot; (o) plane; (p) playground; (q) residential; (r) river; (s) ship; (t) solar power area; (u) square; (v) tennis court; (w) water; (x) wharf; and (y) workshop.
Figure 3. Top-5 retrieval results that are irrelevant to the query image.
Figure 4. Recall-precision curves for the different methods. (a) DCC method based on Alexnet; (b) DCC method based on VGG; (c) DPCA method based on Alexnet; and (d) DPCA method based on VGG.
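The recall-precision curves in Figure 4 are obtained by sweeping down each ranked result list. A minimal sketch of that computation, assuming the ranked 0/1 relevance flags and the number of relevant database images per query are already available (function and variable names are illustrative):

```python
import numpy as np


def recall_precision_curve(relevance, num_relevant):
    """Recall and precision after each position of one ranked result list.

    relevance    : 0/1 flags, 1 if the i-th returned image matches the query class
    num_relevant : total number of relevant images in the database for this query
    """
    flags = np.asarray(relevance, dtype=float)
    hits = np.cumsum(flags)                     # relevant images retrieved so far
    ranks = np.arange(1, len(flags) + 1)        # number of images returned so far
    return hits / num_relevant, hits / ranks


# toy example: 10 returned images, 4 relevant images in the archive
recall, precision = recall_precision_curve([1, 1, 0, 1, 0, 0, 1, 0, 0, 0], num_relevant=4)
print(recall[-1], precision[-1])  # 1.0 0.4
```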
Figure 5. Comparison of the best-performing configurations of the different methods.
Figure 6. Feature Weighted Maps of the DCC models. The first column of each row shows the original image; the second to fifth columns show the FWMs extracted by the ADCC64, Alexnet, VDCC64 and VGG models, respectively.
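The FWMs in Figure 6 visualize which image regions contribute most to the learned codes. The exact FWM formulation is given in the main text; the sketch below only conveys the general idea with a CAM-style weighted sum over the last convolutional feature maps, where the channel weights and activations are placeholders.

```python
import numpy as np


def feature_weighted_map(conv_maps, channel_weights):
    """CAM-style stand-in for the FWM: weight the last convolutional feature
    maps by a per-channel importance and normalize the result to [0, 1].

    conv_maps       : array of shape (C, H, W), activations of the last conv layer
    channel_weights : array of shape (C,), importance of each channel for the code
    """
    fwm = np.tensordot(channel_weights, conv_maps, axes=([0], [0]))  # weighted sum -> (H, W)
    fwm -= fwm.min()
    if fwm.max() > 0:
        fwm /= fwm.max()
    return fwm


# toy usage with random activations; a real map would be upsampled to the image size
rng = np.random.default_rng(0)
heat = feature_weighted_map(rng.random((256, 13, 13)), rng.random(256))
print(heat.shape)  # (13, 13)
```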
Table 1. P@100 values obtained using the models based on the Alexnet, VGG and corresponding DCC frameworks.
Class | Alexnet | ADCC1024 | ADCC256 | ADCC64 | ADCC32 | VGG | VDCC1024 | VDCC256 | VDCC64 | VDCC32
agricultural | 0.5241 | 0.5448 | 0.6005 | 0.6770 | 0.6882 | 0.6908 | 0.6575 | 0.7082 | 0.7154 | 0.5986
airport | 0.5979 | 0.6504 | 0.7274 | 0.7326 | 0.7414 | 0.5594 | 0.6037 | 0.6771 | 0.7116 | 0.6675
basketball court | 0.4384 | 0.5227 | 0.5111 | 0.5611 | 0.5322 | 0.3478 | 0.3819 | 0.4186 | 0.4509 | 0.4352
bridge | 0.8344 | 0.8756 | 0.9004 | 0.9015 | 0.8900 | 0.7831 | 0.8612 | 0.8776 | 0.8953 | 0.8441
building | 0.5965 | 0.6491 | 0.6502 | 0.6728 | 0.6687 | 0.5119 | 0.5944 | 0.6236 | 0.6829 | 0.6614
container | 0.6688 | 0.7232 | 0.7752 | 0.7896 | 0.7859 | 0.7718 | 0.7863 | 0.8016 | 0.8057 | 0.7821
fishpond | 0.8289 | 0.8703 | 0.8810 | 0.8748 | 0.8590 | 0.8160 | 0.8322 | 0.8300 | 0.8541 | 0.8450
footbridge | 0.6231 | 0.6723 | 0.6967 | 0.7304 | 0.7350 | 0.7177 | 0.7404 | 0.7766 | 0.8113 | 0.6802
forest | 0.9126 | 0.9057 | 0.9057 | 0.9272 | 0.9290 | 0.9407 | 0.9419 | 0.9495 | 0.9502 | 0.9075
greenhouse | 0.9234 | 0.9316 | 0.9446 | 0.9415 | 0.9444 | 0.9458 | 0.9497 | 0.9559 | 0.9545 | 0.9132
intersection | 0.6707 | 0.7174 | 0.7850 | 0.8147 | 0.7996 | 0.8196 | 0.8379 | 0.8375 | 0.8462 | 0.8154
oiltank | 0.8043 | 0.8498 | 0.8879 | 0.8810 | 0.8951 | 0.8223 | 0.8571 | 0.9064 | 0.9128 | 0.8469
overpass | 0.6624 | 0.6903 | 0.7793 | 0.7825 | 0.7870 | 0.7768 | 0.8016 | 0.8288 | 0.8696 | 0.8230
parking lot | 0.5376 | 0.5815 | 0.6198 | 0.6630 | 0.6549 | 0.5711 | 0.6256 | 0.6720 | 0.7149 | 0.6664
plane | 0.8502 | 0.8666 | 0.8819 | 0.8729 | 0.8847 | 0.7719 | 0.8298 | 0.8513 | 0.8980 | 0.8534
playground | 0.7081 | 0.7162 | 0.7556 | 0.7709 | 0.7475 | 0.7097 | 0.7464 | 0.7406 | 0.7426 | 0.6855
residential | 0.7020 | 0.7459 | 0.7194 | 0.7466 | 0.7422 | 0.7307 | 0.7439 | 0.7756 | 0.7792 | 0.7663
river | 0.4669 | 0.5080 | 0.6092 | 0.6655 | 0.6416 | 0.5115 | 0.5874 | 0.5966 | 0.6575 | 0.5987
ship | 0.8885 | 0.8799 | 0.8982 | 0.8936 | 0.8654 | 0.9245 | 0.9208 | 0.9380 | 0.9055 | 0.8886
solar power area | 0.7502 | 0.7732 | 0.7757 | 0.8214 | 0.8164 | 0.8157 | 0.8235 | 0.8458 | 0.8374 | 0.7908
square | 0.5082 | 0.5187 | 0.6074 | 0.6435 | 0.6570 | 0.5153 | 0.5323 | 0.6053 | 0.6634 | 0.6362
tennis court | 0.4515 | 0.5043 | 0.5366 | 0.5928 | 0.6103 | 0.4915 | 0.5460 | 0.6249 | 0.6257 | 0.3984
water | 0.8726 | 0.9008 | 0.9003 | 0.8920 | 0.8609 | 0.9249 | 0.9053 | 0.9149 | 0.9142 | 0.8043
wharf | 0.6476 | 0.7325 | 0.7602 | 0.8004 | 0.7858 | 0.5102 | 0.6530 | 0.7272 | 0.7532 | 0.6968
workshop | 0.8676 | 0.8703 | 0.8802 | 0.8899 | 0.8705 | 0.8378 | 0.8658 | 0.8730 | 0.9033 | 0.8709
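P@100 in Table 1 (and P@150 in Table 2 below) is the fraction of relevant images among the top-k returned results, averaged over the queries of each class. A minimal per-query sketch, with illustrative labels:

```python
import numpy as np


def precision_at_k(ranked_labels, query_label, k=100):
    """Fraction of the top-k retrieved images that share the query's class label."""
    topk = np.asarray(ranked_labels[:k])
    return float(np.mean(topk == query_label))


# toy example: 83 of the top-100 images returned for a "bridge" query are bridges
retrieved = ["bridge"] * 83 + ["river"] * 17
print(precision_at_k(retrieved, "bridge", k=100))  # 0.83
```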
Table 2. P@150 values obtained using the models based on the Alexnet, VGG and corresponding DCC frameworks.
Class | Alexnet | ADCC1024 | ADCC256 | ADCC64 | ADCC32 | VGG | VDCC1024 | VDCC256 | VDCC64 | VDCC32
agricultural | 0.4722 | 0.4981 | 0.5591 | 0.6337 | 0.6461 | 0.6420 | 0.6106 | 0.6628 | 0.6696 | 0.5462
airport | 0.5531 | 0.6121 | 0.6865 | 0.6931 | 0.6980 | 0.5018 | 0.5523 | 0.6232 | 0.6591 | 0.6241
basketball court | 0.4172 | 0.4955 | 0.4827 | 0.5293 | 0.5048 | 0.3246 | 0.3603 | 0.4003 | 0.4282 | 0.4136
bridge | 0.8001 | 0.8628 | 0.8856 | 0.8915 | 0.8732 | 0.7309 | 0.8264 | 0.8580 | 0.8727 | 0.8140
building | 0.5496 | 0.6130 | 0.6198 | 0.6334 | 0.6259 | 0.4697 | 0.5533 | 0.5849 | 0.6455 | 0.6284
container | 0.6094 | 0.6767 | 0.7472 | 0.7547 | 0.7502 | 0.7127 | 0.7436 | 0.7671 | 0.7709 | 0.7311
fishpond | 0.7986 | 0.8474 | 0.8636 | 0.8459 | 0.8309 | 0.7817 | 0.8063 | 0.7960 | 0.8250 | 0.8110
footbridge | 0.5793 | 0.6289 | 0.6565 | 0.6879 | 0.6971 | 0.6777 | 0.6974 | 0.7332 | 0.7713 | 0.6369
forest | 0.8868 | 0.8894 | 0.8923 | 0.9165 | 0.9216 | 0.9281 | 0.9340 | 0.9370 | 0.9362 | 0.8834
greenhouse | 0.9032 | 0.9144 | 0.9278 | 0.9138 | 0.9241 | 0.9329 | 0.9360 | 0.9440 | 0.9423 | 0.8972
intersection | 0.6257 | 0.6824 | 0.7522 | 0.7729 | 0.7585 | 0.7790 | 0.7979 | 0.8031 | 0.8048 | 0.7796
oiltank | 0.7667 | 0.8228 | 0.8650 | 0.8571 | 0.8637 | 0.7755 | 0.8186 | 0.8757 | 0.8782 | 0.8170
overpass | 0.6150 | 0.6525 | 0.7474 | 0.7497 | 0.7464 | 0.7295 | 0.7642 | 0.7968 | 0.8364 | 0.7926
parking lot | 0.4980 | 0.5463 | 0.5839 | 0.6250 | 0.6129 | 0.5241 | 0.5841 | 0.6203 | 0.6715 | 0.6221
plane | 0.8074 | 0.8335 | 0.8512 | 0.8437 | 0.8453 | 0.7107 | 0.7866 | 0.8119 | 0.8635 | 0.8119
playground | 0.6707 | 0.6802 | 0.7240 | 0.7309 | 0.7003 | 0.6617 | 0.7017 | 0.7027 | 0.6997 | 0.6330
residential | 0.6578 | 0.7149 | 0.6900 | 0.7125 | 0.6944 | 0.6838 | 0.7003 | 0.7432 | 0.7413 | 0.7366
river | 0.4204 | 0.4607 | 0.5687 | 0.6142 | 0.5853 | 0.4581 | 0.5360 | 0.5506 | 0.6089 | 0.5363
ship | 0.8631 | 0.8638 | 0.8875 | 0.8682 | 0.8401 | 0.8934 | 0.8960 | 0.9240 | 0.8873 | 0.8650
solar power area | 0.7139 | 0.7447 | 0.7513 | 0.7827 | 0.7786 | 0.7685 | 0.7862 | 0.8131 | 0.8040 | 0.7535
square | 0.4662 | 0.4827 | 0.5676 | 0.6012 | 0.6045 | 0.4580 | 0.4841 | 0.5566 | 0.6179 | 0.5989
tennis court | 0.4130 | 0.4757 | 0.5078 | 0.5553 | 0.5726 | 0.4426 | 0.4941 | 0.5806 | 0.5874 | 0.3786
water | 0.8554 | 0.8883 | 0.8909 | 0.8830 | 0.8487 | 0.9235 | 0.9036 | 0.9145 | 0.9106 | 0.7599
wharf | 0.6024 | 0.6945 | 0.7264 | 0.7689 | 0.7502 | 0.4549 | 0.6035 | 0.6823 | 0.7209 | 0.6458
workshop | 0.8387 | 0.8393 | 0.8578 | 0.8614 | 0.8401 | 0.7969 | 0.8308 | 0.8378 | 0.8746 | 0.8301
Table 3. Retrieval accuracies of the models based on Alexnet, VGG and corresponding DCC frameworks according to the P@100 values.
Model | Accuracy | Model | Accuracy
Alexnet | 69.35% | VGG | 71.27%
ADCC1024 | 72.80% | VDCC1024 | 74.50%
ADCC256 | 75.96% | VDCC256 | 77.43%
ADCC64 | 78.16% | VDCC64 | 79.42%
ADCC32 | 77.57% | VDCC32 | 73.91%
Table 4. Retrieval accuracies of the models based on Alexnet, VGG and corresponding DCC frameworks according to the P@150 values.
Model | Accuracy | Model | Accuracy
Alexnet | 65.54% | VGG | 67.05%
ADCC1024 | 69.68% | VDCC1024 | 70.83%
ADCC256 | 73.17% | VDCC256 | 74.08%
ADCC64 | 74.91% | VDCC64 | 76.11%
ADCC32 | 74.05% | VDCC32 | 70.19%
Table 5. Confusion matrix of classification results based on Alexnet.
ClassC1C2C3C4C5C6C7C8C9C10C11C12C13C14C15C16C17C18C19C20C21C22C23C24C25P(%)
C119451000522410901205053461077.60
C2319811401103531017121035000079.20
C33014704103100206032510053802058.80
C420023100300001102000300024192.40
C50110193316003221501122003401077.20
C6111002110000110193060032000184.40
C700000023110110102006210301092.40
C811006101900014111180013002000176.00
C901000001241000010001030020096.40
C1000000000024800100100000000099.20
C1110000002202290531010032010091.60
C1201001001001231241000014000392.40
C13110000130016022100101050000088.40
C14102012203204202110020003101484.40
C1503000100000000240000002004096.00
C16012200022000102221200004200084.80
C17000092014160013002111000200084.40
C181142382376014065221170000301068.00
C1900000000100000100024200015096.80
C2035010222010123012002240010089.60
C2110300107121151218561100172400068.80
C22202004216003259076300517400169.60
C23700000001100000100006002230289.20
C24000413100110005004220110206082.40
C2501310210000001000003100123694.40
P(%)80.8387.6173.5095.8576.5988.2891.3081.5588.9395.3875.3393.1579.5064.1387.9179.7084.4085.4389.9687.1680.3773.7394.4990.7594.7884.58
Table 6. Confusion matrix of classification results based on ADCC64.
ClassC1C2C3C4C5C6C7C8C9C10C11C12C13C14C15C16C17C18C19C20C21C22C23C24C25P(%)
C120442000550300611103052350081.60
C2420420110002321315003016200081.60
C30016105001000007019400074303064.40
C422023300000000000000050008093.20
C50041189307001021700142007300075.60
C6010102140000000211040041002185.60
C711000023511200100006200000094.00
C8211090020400815140012010100081.60
C910000000239000010011010060095.60
C1000000000024700100000020000098.80
C1110101102002281931000011000091.20
C1200000100000233012001026100393.20
C13320000250010022400101020000089.60
C1400107303001222150002028201186.00
C1506000000000000236001004003094.40
C16013101010000100021000005000084.00
C170000133020050211002101000201084.00
C181023062210303063001194000500077.60
C1900000000100000100024300014097.20
C2042000210210000002002323001092.80
C21722040130031221640000189401075.60
C22201904104003036033300619300077.20
C2390000000610000000007002250290.00
C24100233100000023003190000213085.20
C2500020210000001100104120023594.00
P(%)81.2789.4770.9397.4977.7890.6894.3882.5994.8496.4886.0496.6881.1666.1591.8388.2487.5086.6192.0586.2576.8373.9594.9489.8797.1186.56
Table 7. Confusion matrix of classification results based on VGG.
ClassC1C2C3C4C5C6C7C8C9C10C11C12C13C14C15C16C17C18C19C20C21C22C23C24C25P(%)
C120661000832310000136012330182.40
C282023220020114713124007000080.80
C30313109220002014135650093702152.40
C432023400100000000001100016193.60
C5006020210100042500146005301080.80
C600003221030002062060004002188.40
C741101023100101200105000002092.40
C811007112190010250114004001187.60
C910000000245010000011000010098.00
C1020000010024600000000010000098.40
C1121100100002330711011000100093.20
C1200000100000244011000001101097.60
C1300000001007023710111000100094.80
C14005012503002102032022005401381.20
C1504000300000100235001104010094.00
C16021900020000100022001004100088.00
C1700008401101104002243001200089.60
C18187102234304023014187001402274.80
C1910000000100000000024300005097.20
C2051010103051020302012223000088.80
C211094065030024013421100181500072.40
C22202606103121201126500018900275.60
C2360000000700000000111002340093.60
C24000516111001003002220000207082.80
C2511210100031000100004010023493.60
P(%)76.3084.1765.5096.3077.9986.6792.4088.6693.8794.2590.3191.7390.4681.8591.4483.0281.7578.9090.3396.9478.3575.0097.5090.0095.1286.88
Table 8. Confusion matrix of classification results based on VDCC64.
ClassC1C2C3C4C5C6C7C8C9C10C11C12C13C14C15C16C17C18C19C20C21C22C23C24C25P(%)
C121641000401200001106018211186.40
C282053120110011114236006202082.00
C311158010011001004022440033802063.20
C434023400100000001001010005093.60
C51050207102000001201143004000082.80
C6101012200000000103051003102288.00
C770000022700000100108001104090.80
C800207102190030260007002100087.60
C910000000246000000011000100098.40
C1010000000024800000000000100099.20
C1120201101002300611002002100092.00
C1200100100000245001000002000098.00
C1320000001008023900000000000095.60
C14003010400000102221111002001388.80
C1505000200000000238000003002095.20
C16003502100000000020602004000082.40
C17002013101000002002240012400089.60
C1865407144403020013202002200080.80
C1901010000000000000023900018095.60
C2082001004121010002002253000090.00
C21551107001000408132100196303078.40
C22302607006000001014300419401077.60
C23100010000400000000004002301092.00
C24011225110000016003110000216086.40
C2512100210010011000101100023794.80
P(%)78.2687.2361.7297.9174.7391.6794.5890.5096.0998.0293.1297.6194.4782.5392.6186.1985.1780.1695.6096.5779.0377.2999.1487.1097.5388.37
Table 9. P@100 values obtained using the DPCA method.
Class | Alexnet | APCA1024 | APCA256 | APCA64 | APCA32 | VGG | VPCA1024 | VPCA256 | VPCA64 | VPCA32
agricultural | 0.5241 | 0.3437 | 0.4780 | 0.5340 | 0.5750 | 0.6908 | 0.5730 | 0.6571 | 0.6785 | 0.6765
airport | 0.5979 | 0.3119 | 0.4946 | 0.6025 | 0.6458 | 0.5594 | 0.2680 | 0.4884 | 0.6046 | 0.6392
basketball court | 0.4384 | 0.3179 | 0.4178 | 0.4554 | 0.4560 | 0.3478 | 0.1729 | 0.3282 | 0.3878 | 0.3933
bridge | 0.8344 | 0.4913 | 0.7283 | 0.8194 | 0.8567 | 0.7831 | 0.4218 | 0.7400 | 0.8179 | 0.8358
building | 0.5965 | 0.3791 | 0.4954 | 0.5677 | 0.6136 | 0.5119 | 0.1907 | 0.4404 | 0.5844 | 0.6321
container | 0.6688 | 0.4536 | 0.5662 | 0.6451 | 0.6834 | 0.7718 | 0.3769 | 0.6794 | 0.7577 | 0.7841
fishpond | 0.8289 | 0.5668 | 0.7296 | 0.7932 | 0.8191 | 0.8160 | 0.4713 | 0.7211 | 0.8155 | 0.8306
footbridge | 0.6231 | 0.4420 | 0.5352 | 0.5994 | 0.6249 | 0.7177 | 0.3492 | 0.6279 | 0.7328 | 0.7601
forest | 0.9126 | 0.8978 | 0.8862 | 0.8953 | 0.9056 | 0.9407 | 0.9205 | 0.9302 | 0.9331 | 0.9355
greenhouse | 0.9234 | 0.7992 | 0.8624 | 0.8991 | 0.9143 | 0.9458 | 0.7723 | 0.8988 | 0.9350 | 0.9413
intersection | 0.6707 | 0.4488 | 0.5590 | 0.6351 | 0.6770 | 0.8196 | 0.5124 | 0.7345 | 0.7922 | 0.8227
oiltank | 0.8043 | 0.2754 | 0.5695 | 0.7278 | 0.7807 | 0.8223 | 0.0632 | 0.2601 | 0.6088 | 0.7353
overpass | 0.6624 | 0.3990 | 0.5523 | 0.6520 | 0.6898 | 0.7768 | 0.2972 | 0.6504 | 0.7757 | 0.8046
parking lot | 0.5376 | 0.3586 | 0.4531 | 0.5247 | 0.5584 | 0.5711 | 0.1822 | 0.4463 | 0.5614 | 0.6066
plane | 0.8502 | 0.6412 | 0.7668 | 0.8351 | 0.8570 | 0.7719 | 0.3859 | 0.6749 | 0.7898 | 0.8289
playground | 0.7081 | 0.4823 | 0.6079 | 0.6666 | 0.6985 | 0.7097 | 0.4133 | 0.6124 | 0.6761 | 0.6987
residential | 0.7020 | 0.5557 | 0.6076 | 0.6549 | 0.6856 | 0.7307 | 0.3774 | 0.6228 | 0.7034 | 0.7286
river | 0.4669 | 0.3353 | 0.4243 | 0.4892 | 0.5114 | 0.5115 | 0.3201 | 0.4739 | 0.5595 | 0.5846
ship | 0.8885 | 0.7902 | 0.8394 | 0.8644 | 0.8720 | 0.9245 | 0.7502 | 0.8822 | 0.9097 | 0.9076
solar power area | 0.7502 | 0.4966 | 0.6166 | 0.6762 | 0.7150 | 0.8157 | 0.5932 | 0.7326 | 0.7819 | 0.8028
square | 0.5082 | 0.3472 | 0.4348 | 0.4917 | 0.5215 | 0.5153 | 0.3094 | 0.4677 | 0.5301 | 0.5608
tennis court | 0.4515 | 0.2690 | 0.4001 | 0.4710 | 0.4841 | 0.4915 | 0.2042 | 0.4101 | 0.5293 | 0.5685
water | 0.8726 | 0.8764 | 0.8492 | 0.8595 | 0.8652 | 0.9249 | 0.9288 | 0.9056 | 0.9046 | 0.9100
wharf | 0.6476 | 0.3556 | 0.5835 | 0.6725 | 0.7321 | 0.5102 | 0.1989 | 0.4853 | 0.6164 | 0.6859
workshop | 0.8676 | 0.6754 | 0.7760 | 0.8182 | 0.8486 | 0.8378 | 0.4671 | 0.6946 | 0.7868 | 0.8275
Table 10. P@150 values obtained using the DPCA method.
Class | Alexnet | APCA1024 | APCA256 | APCA64 | APCA32 | VGG | VPCA1024 | VPCA256 | VPCA64 | VPCA32
agricultural | 0.4722 | 0.2952 | 0.4206 | 0.4775 | 0.5231 | 0.6420 | 0.4801 | 0.5987 | 0.6263 | 0.6277
airport | 0.5531 | 0.2550 | 0.4313 | 0.5447 | 0.5949 | 0.5018 | 0.2087 | 0.4139 | 0.5462 | 0.5854
basketball court | 0.4172 | 0.2780 | 0.3827 | 0.4297 | 0.4334 | 0.3246 | 0.1432 | 0.2972 | 0.3653 | 0.3781
bridge | 0.8001 | 0.3882 | 0.6507 | 0.7722 | 0.8273 | 0.7309 | 0.3183 | 0.6610 | 0.7765 | 0.8032
building | 0.5496 | 0.3202 | 0.4345 | 0.5134 | 0.5689 | 0.4697 | 0.1531 | 0.3858 | 0.5366 | 0.5924
container | 0.6094 | 0.3606 | 0.4845 | 0.5689 | 0.6143 | 0.7127 | 0.2811 | 0.5971 | 0.7000 | 0.7301
fishpond | 0.7986 | 0.4473 | 0.6573 | 0.7500 | 0.7867 | 0.7817 | 0.3718 | 0.6474 | 0.7741 | 0.7951
footbridge | 0.5793 | 0.3772 | 0.4769 | 0.5515 | 0.5804 | 0.6777 | 0.2611 | 0.5488 | 0.6856 | 0.7181
forest | 0.8868 | 0.8422 | 0.8435 | 0.8596 | 0.8797 | 0.9281 | 0.8858 | 0.9099 | 0.9202 | 0.9237
greenhouse | 0.9032 | 0.7173 | 0.8151 | 0.8660 | 0.8889 | 0.9329 | 0.6545 | 0.8552 | 0.9178 | 0.9254
intersection | 0.6257 | 0.3729 | 0.4929 | 0.5817 | 0.6287 | 0.7790 | 0.3952 | 0.6619 | 0.7459 | 0.7837
oiltank | 0.7667 | 0.2063 | 0.4497 | 0.6534 | 0.7235 | 0.7755 | 0.0471 | 0.1905 | 0.4880 | 0.6449
overpass | 0.6150 | 0.3333 | 0.4873 | 0.5950 | 0.6419 | 0.7295 | 0.2230 | 0.5583 | 0.7171 | 0.7628
parking lot | 0.4980 | 0.3011 | 0.4029 | 0.4829 | 0.5147 | 0.5241 | 0.1376 | 0.3714 | 0.4987 | 0.5514
plane | 0.8074 | 0.5406 | 0.7004 | 0.7856 | 0.8160 | 0.7107 | 0.2869 | 0.5809 | 0.7274 | 0.7809
playground | 0.6707 | 0.3921 | 0.5415 | 0.6194 | 0.6614 | 0.6617 | 0.3164 | 0.5398 | 0.6258 | 0.6567
residential | 0.6578 | 0.4598 | 0.5402 | 0.6029 | 0.6433 | 0.6838 | 0.2900 | 0.5482 | 0.6502 | 0.6872
river | 0.4204 | 0.2916 | 0.3747 | 0.4364 | 0.4576 | 0.4581 | 0.2604 | 0.4137 | 0.5053 | 0.5309
ship | 0.8631 | 0.6881 | 0.7943 | 0.8384 | 0.8515 | 0.8934 | 0.6171 | 0.8269 | 0.8890 | 0.8893
solar power area | 0.7139 | 0.3878 | 0.5351 | 0.6172 | 0.6637 | 0.7685 | 0.4588 | 0.6490 | 0.7236 | 0.7558
square | 0.4662 | 0.2961 | 0.3861 | 0.4442 | 0.4769 | 0.4580 | 0.2542 | 0.4050 | 0.4697 | 0.5042
tennis court | 0.4130 | 0.2317 | 0.3587 | 0.4279 | 0.4411 | 0.4426 | 0.1625 | 0.3542 | 0.4799 | 0.5250
water | 0.8554 | 0.8458 | 0.8145 | 0.8366 | 0.8446 | 0.9235 | 0.9274 | 0.8986 | 0.9012 | 0.9072
wharf | 0.6024 | 0.2947 | 0.5113 | 0.6191 | 0.6899 | 0.4549 | 0.1540 | 0.4114 | 0.5547 | 0.6382
workshop | 0.8387 | 0.5634 | 0.7200 | 0.7777 | 0.8155 | 0.7969 | 0.3512 | 0.5970 | 0.7257 | 0.7836
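Tables 9 and 10 evaluate the DPCA baseline, in which the high-dimensional activations of the traditional CNNs are compressed with PCA before retrieval. A minimal sketch of that kind of compression, assuming the activations are stored as a NumPy array and using scikit-learn's PCA (the paper's own implementation details may differ):

```python
import numpy as np
from sklearn.decomposition import PCA


def pca_compress(features, dim=64):
    """Project high-dimensional CNN activations onto their first `dim`
    principal components, in the spirit of the DPCA baseline."""
    pca = PCA(n_components=dim)
    return pca.fit_transform(features), pca


# toy example: 1000 images described by 4096-D fully connected activations
feats = np.random.rand(1000, 4096).astype(np.float32)
compressed, pca_model = pca_compress(feats, dim=64)
print(compressed.shape)  # (1000, 64)
```

In a retrieval setting, the PCA basis would typically be fitted on the archive features and then applied to each query feature with pca_model.transform.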
Table 11. Retrieval accuracies of the different dimensional features obtained using the DPCA method according to the P@100 values.
Model | Accuracy | Model | Accuracy
Alexnet | 69.35% | VGG | 71.27%
APCA1024 | 49.24% | VPCA1024 | 42.08%
APCA256 | 60.94% | VPCA256 | 62.26%
APCA64 | 67.40% | VPCA64 | 71.09%
APCA32 | 70.37% | VPCA32 | 74.01%
Table 12. Retrieval accuracies of the different dimensional features obtained using the DPCA method according to the P@150 values.
Model | Accuracy | Model | Accuracy
Alexnet | 65.54% | VGG | 67.05%
APCA1024 | 41.95% | VPCA1024 | 34.56%
APCA256 | 54.83% | VPCA256 | 55.69%
APCA64 | 62.61% | VPCA64 | 66.20%
APCA32 | 66.27% | VPCA32 | 69.92%
Table 13. Comparison of the retrieval mAP values of the DCC and DPCA methods.
Model | mAP | Model | mAP | Model | mAP | Model | mAP
Alexnet | 0.5779 | VGG | 0.5795 | - | - | - | -
ADCC1024 | 0.6257 | VDCC1024 | 0.6203 | APCA1024 | 0.3108 | VPCA1024 | 0.2608
ADCC256 | 0.6615 | VDCC256 | 0.6524 | APCA256 | 0.4258 | VPCA256 | 0.4311
ADCC64 | 0.6630 | VDCC64 | 0.6659 | APCA64 | 0.5224 | VPCA64 | 0.5512
ADCC32 | 0.6475 | VDCC32 | 0.6030 | APCA32 | 0.5714 | VPCA32 | 0.5971
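The mAP values in Table 13 average, over all queries, the precision obtained at each rank where a relevant image is retrieved. A minimal sketch of that computation on binary relevance lists (the exact evaluation protocol is described in the main text):

```python
import numpy as np


def average_precision(relevance):
    """AP for one query: mean of the precision values at the ranks where a
    relevant image is retrieved (relevance is a ranked 0/1 list)."""
    flags = np.asarray(relevance, dtype=float)
    if flags.sum() == 0:
        return 0.0
    precision_at_hits = np.cumsum(flags) / (np.arange(len(flags)) + 1)
    return float((precision_at_hits * flags).sum() / flags.sum())


def mean_average_precision(relevance_lists):
    """mAP over a set of queries."""
    return float(np.mean([average_precision(r) for r in relevance_lists]))


# toy example: two queries with relevance patterns over their top-5 results
print(mean_average_precision([[1, 0, 1, 0, 0], [0, 1, 1, 1, 0]]))  # ~0.736
```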

