Article
Peer-Review Record

Low-Rank Hypergraph Hashing for Large-Scale Remote Sensing Image Retrieval

Remote Sens. 2020, 12(7), 1164; https://doi.org/10.3390/rs12071164
by Jie Kong 1, Quansen Sun 1,*, Mithun Mukherjee 2 and Jaime Lloret 3
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 11 February 2020 / Revised: 2 April 2020 / Accepted: 2 April 2020 / Published: 4 April 2020
(This article belongs to the Special Issue New Advances on Sub-pixel Processing: Unmixing and Mapping Methods)

Round 1

Reviewer 1 Report

The authors present a new method based on low-rank hypergraph hashing for remote sensing image retrieval. The paper is well-structured; however, I have some concerns regarding the content, as follows:

- How does the proposed method specifically address each of the state-of-the-art problems (see line 65, page 2)?

- The authors should spell out the novelties with respect to the state of the art.

- How does the method compare to locality-sensitive hashing (LSH) methods? The authors should also consider the state of the art of LSH for the RSIR task. There are many different LSH methods available (not just random projections).

- What is the motivation for using gradient descent with curvilinear search instead of other optimisation algorithms?

- All parameters need to be specified for the methods used in the results section.

 

Author Response

Dear anonymous reviewer:

 

Thank you very much for your time and effort in handling our manuscript, and we sincerely thank the reviewers for their time and effort in reading it and providing helpful suggestions. We hope that we have adequately answered all the queries; our responses to the specific comments are given below. Specifically, the main revised parts are highlighted in yellow in the revised manuscript, and the main deleted parts are highlighted in red.

 

The rest of this letter contains our point-by-point responses to the comments raised by the reviewers. The comments are reproduced in italics, and our responses are given directly afterwards in normal font.

 

Best regards,

Jie Kong, Quansen Sun, Mithun Mukherjee, and Jaime Lloret

 

 

 

Reply to reviewer #1:

Comment 1:  - How does the proposed method specifically address each of the state-of-the-art problems (see line 65, page 2)?

- The authors should spell out the novelties with respect to the state of the art.

Reply: Remote sensing (RS) images usually contain large variations because of the large data volume, small object sizes, and rich backgrounds. Hashing learning is useful for large-scale retrieval owing to its efficiency in computation and storage. However, the relationships among RS images are complex and high-order, whereas existing hashing methods only use pairwise similarities to capture the relationships among data, so they cannot effectively capture the high-order information of RS images. In contrast, we introduce hypergraph learning to extract local grouping information, which is beneficial to clustering. In addition, because of the rich backgrounds, we introduce low-rank representation to filter out unrelated features and reduce correlation. Therefore, we propose the LHH method by introducing low-rankness and the hypergraph structure into supervised hashing for the RSIR application. Compared with the state of the art, LHH improves RSIR precision by reducing the correlation of the projection matrix and mining the high-order internal structure of the data.
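
For clarity, the following minimal sketch (Python, for illustration in this letter only; the function name, the unit hyperedge weights, and the choice of k are our placeholders and do not correspond to the notation or the exact construction in the manuscript) shows the standard kNN-based hypergraph incidence matrix and the normalized hypergraph Laplacian on which this kind of local-grouping modeling relies:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_hypergraph_laplacian(X, k=8):
        # Each sample together with its k nearest neighbours defines one hyperedge,
        # so there are n vertices and n hyperedges.
        n = X.shape[0]
        _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)

        H = np.zeros((n, n))              # incidence matrix: vertices x hyperedges
        for e in range(n):
            H[idx[e], e] = 1.0

        w = np.ones(n)                    # unit hyperedge weights (illustrative)
        Dv = H @ w                        # vertex degrees
        De = H.sum(axis=0)                # hyperedge degrees
        Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))

        # Normalized hypergraph Laplacian: I - Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2)
        Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
        return np.eye(n) - Theta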

 

Comment 2:  - How does the method compare to locality-sensitive hashing (LSH) methods? The authors should also consider the state of the art of LSH for the RSIR task. There are many different LSH methods available (not just random projections).

Reply: We have added the LSH method to the comparison; the experimental results are given below (Tables 3-6 and Figure 6).

 

 

Table 3. Comparison of mAP with different hash code lengths on UCMD

Code length    LSH      SH       PRH      HSH      KSH      SDH      LHH
8-bits         0.1256   0.1394   0.1354   0.2395   0.2791   0.2684   0.3102
16-bits        0.1375   0.1479   0.1483   0.2468   0.2913   0.2897   0.3385
32-bits        0.1421   0.1534   0.1594   0.2587   0.3147   0.3214   0.3573
64-bits        0.1476   0.1589   0.1718   0.2671   0.3278   0.3348   0.3760

Table 4. Comparison of mAP with different hash code lengths on SAT4

Code length    LSH      SH       PRH      HSH      KSH      SDH      LHH
8-bits         0.3147   0.3086   0.3980   0.5551   0.5037   0.5467   0.6046
16-bits        0.3175   0.3149   0.4023   0.5632   0.5112   0.5589   0.6295
32-bits        0.3197   0.3218   0.4095   0.5729   0.5268   0.5727   0.6573
64-bits        0.3209   0.3256   0.4078   0.5756   0.5189   0.5671   0.6922

Table 5. Comparison of mAP with different hash code lengths on SAT6

Code length    LSH      SH       PRH      HSH      KSH      SDH      LHH
8-bits         0.3387   0.3242   0.5054   0.6237   0.5941   0.6189   0.6592
16-bits        0.3438   0.3317   0.5117   0.6319   0.6011   0.6348   0.6727
32-bits        0.3417   0.3478   0.5172   0.6402   0.6147   0.6514   0.7035
64-bits        0.3422   0.3449   0.5184   0.6446   0.6214   0.6621   0.7333

Table 6. Comparison of mAP with different hash code lengths on CIFAR10

Code length    LSH      SH       PRH      HSH      KSH      SDH      LHH
8-bits         0.1214   0.1648   0.2185   0.3919   0.3726   0.4367   0.4618
16-bits        0.1256   0.1687   0.2256   0.4031   0.4017   0.4613   0.4845
32-bits        0.1302   0.1724   0.2336   0.4163   0.4225   0.4967   0.5034
64-bits        0.1287   0.1693   0.2312   0.4203   0.4412   0.5111   0.5259

 

Figure 6. P-R curves of several hashing methods on (a) UCMD, (b) SAT4, (c) SAT6, and (d) CIFAR10.
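
For completeness, the random-projection form of LSH that serves as the simplest baseline can be sketched as follows (illustrative Python only; this is the textbook random-projection variant, not necessarily the exact LSH implementation or parameter settings used to produce the numbers above):

    import numpy as np

    def lsh_encode(X, n_bits=32, seed=0):
        # Random-projection LSH: one random hyperplane per bit,
        # bit = 1 if the sample falls on the positive side of the hyperplane.
        rng = np.random.default_rng(seed)
        R = rng.standard_normal((X.shape[1], n_bits))
        return (X @ R > 0).astype(np.uint8)

    def hamming_rank(query_code, db_codes):
        # Rank database items by Hamming distance to the query code.
        d = np.count_nonzero(db_codes != query_code, axis=1)
        return np.argsort(d)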

 

Comment 3:  - What is the motivation for using gradient descent with curvilinear search instead of other optimisation algorithms?

Reply: Because the optimization of variable A is a non-convex problem with an orthogonality constraint, most standard optimization methods cannot handle it directly. Inspired by [44], we solve this constrained subproblem with a feasible gradient-descent method that uses a Cayley-transform-based curvilinear search, which preserves the orthogonality constraint at every iteration.
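
For illustration, a single step of this curvilinear search can be sketched as follows (schematic Python; the full method in [44] additionally selects the step size tau with a non-monotone line search, which is omitted here, and the variable names are our placeholders):

    import numpy as np

    def cayley_step(A, G, tau):
        # One curvilinear-search step for a variable A with orthonormal columns
        # (A^T A = I) and objective gradient G at A: the Cayley transform keeps
        # A^T A = I exactly after the update.
        n = A.shape[0]
        W = G @ A.T - A @ G.T                        # skew-symmetric matrix
        I = np.eye(n)
        return np.linalg.solve(I + (tau / 2.0) * W, (I - (tau / 2.0) * W) @ A)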

 

Comment 4:  - All parameters need to be specified for the methods used in the results section.

Reply: The parameters involved are the dataset size, the sample dimensionality, the number of classes, the hash code length, the rank of the projection matrix, the number of hyper-edges, the hyperparameters of the sparse regularization term and the hypergraph regularization term, and the number of total iterations. The dataset-dependent parameters are listed in Table 2. The feature dimensionality is 512, as presented in Section 4.2. The hash code length is analyzed in Tables 3-6. The number of hyper-edges and the two regularization hyperparameters are empirically set to 8, 0.01, and 1, respectively, and are analyzed in Sections 4.4 and 4.5.

Reviewer 2 Report

I have no more comments

Author Response

Thanks for your review.

 

We have improved the paper based on the comments provided by reviewer 1 and reviewer 3.

Reviewer 3 Report

The paper presents a hashing method for image retrieval. Experimental results showed that the presented approach outperforms existing methods; thus, the efficiency of the method was demonstrated. However, the paper requires some improvements:

  • There are a lot of typos and grammar errors, e.g.

l. 37:  exact => extract

l. 80: presednted  => presented

l. 314, 354:  retrieval => retrieve 

  •  the experimental results refer to precision only; results presenting recall would be interesting too. Why is it not evaluated?
  •  information about computational complexity would also be valuable (e.g., runtime required for computations).

Author Response

Dear anonymous reviewer:

 

Thank you very much for your time and effort in handling our manuscript, and we sincerely thank the reviewers for their time and effort in reading it and providing helpful suggestions. We hope that we have adequately answered all the queries; our responses to the specific comments are given below. Specifically, the main revised parts are highlighted in yellow in the revised manuscript, and the main deleted parts are highlighted in red.

 

The rest of this letter contains our point-by-point responses to the comments raised by the reviewers. The comments are reproduced in italics, and our responses are given directly afterwards in normal font.

 

 

Best regards,

Jie Kong, Quansen Sun, Mithun Mukherjee, and Jaime Lloret

 

 

Comment 1:  -There are a lot of typos and grammar errors

Reply: According to your comments, we have carefully revised the paper, paying particular attention to typos and grammar errors.

 

Comment 2:  - The experimental results refer to precision only; results presenting recall would be interesting too. Why is it not evaluated?

Reply: In our paper, we have provided the precision-recall (P-R) curves, which also reflect the algorithm performance and the variation of recall.

 

Comment 3:  - Information about computational complexity would also be valuable (e.g., runtime required for computations).

Reply: Thank you very much for your helpful suggestion. We have added a computational complexity analysis in Section 3.5. The details are given as follows.

Then, we present the computational complexity of the proposed LHH method, which mainly consists of the following parts. In the step of updating , the complexity is . In the step of updating , due to the orthogonality constraint, we use the Cayley transform to solve the problem; computing the gradient of  requires  and updating  in each iteration costs  [44], so the complexity of optimizing  is , where  is the number of iterations for updating . In the step of updating , the complexity is ; this step is time-consuming because it involves computing the hypergraph Laplacian matrix. In summary, the total computational complexity of LHH is , where  is the number of total iterations in Algorithm 2. Finally, computing the hashing mapping matrix  requires a time complexity of . For the query part, the computational cost of encoding any query  is .
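
As a simple illustration of the per-query encoding cost mentioned above, a linear hash mapping with sign thresholding can be written as below (schematic Python; the matrix name P is a placeholder for the learned mapping matrix in the manuscript):

    import numpy as np

    def encode_query(x, P):
        # For a d-dimensional query x and a d x L mapping matrix P,
        # the per-query encoding cost is O(dL).
        return (x @ P > 0).astype(np.uint8)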

 
