Article

Homogeneity Distance Classification Algorithm (HDCA): A Novel Algorithm for Satellite Image Classification

1 Department of Remote Sensing and GIS, University of Tehran, Tehran 1417853933, Iran
2 Department of Civil and Surveying Engineering, Graduate University of Advanced Technology, Kerman 7616914111, Iran
3 College of the Environment & Ecology, Xiamen University, South Xiangan Road, Xiangan District, Xiamen 361102, Fujian, China
4 Center for Urban and Environmental Change, Department of Earth and Environmental Systems, Indiana State University, Terre Haute, IN 47809, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(5), 546; https://doi.org/10.3390/rs11050546
Submission received: 3 February 2019 / Revised: 27 February 2019 / Accepted: 1 March 2019 / Published: 6 March 2019
(This article belongs to the Special Issue Remote Sensing for Terrestrial Ecosystem Health)

Abstract

Image classification is one of the most common methods of information extraction from satellite images. In this paper, a novel algorithm for image classification based on gravity theory was developed, called the "homogeneity distance classification algorithm (HDCA)". The proposed HDCA uses texture and spectral information for classifying images in two iterative, supplementary computing stages: (1) traveling and merging, and (2) escaping operators. The HDCA is equipped with a new concept of distance, the weighted Manhattan distance (WMD). Moreover, an improved gravitational search algorithm (IGSA) is applied for selecting features and determining the optimal feature space scale in HDCA. For multispectral satellite image classification, the proposed method was compared with two well-known classification methods, the Maximum Likelihood classifier (MLC) and the Support Vector Machine (SVM). The comparison indicated that the overall accuracy values for HDCA, MLC, and SVM are 95.99%, 93.15%, and 95.00%, respectively. Furthermore, the proposed HDCA method was also used for classifying hyperspectral reference datasets (the Indian Pines, Salinas, and Salinas-A scenes). The classification results indicated substantial improvement over previous algorithms and studies: 2% on the Indian Pines dataset, 0.7% on the Salinas dataset, and 1.2% on the Salinas-A scene. These experimental results demonstrate that the proposed algorithm can classify both multispectral and hyperspectral remote sensing images with reliable accuracy, because it uses the WMD in the classification process and the IGSA to automatically select optimal features for image classification based on spectral and texture information.

1. Introduction

Image classification is one of the most common methods of information extraction from satellite images. Various methods have been developed for satellite image classification. Generally, these methods can be divided into supervised and unsupervised categories [1]. With the development of remote sensing and image processing techniques, a wide variety of supervised and unsupervised methods have been proposed to improve classification accuracy, including Maximum likelihood [2], Artificial neural networks [3], Support vector machines [4], Alternating decision trees [5], Attribute bagging [6], Large margin nearest neighbor [7], the Nearest centroid classifier [8], and tensor-based methods [9,10]. All image classification methods require image features that characterize homogeneous classes; in other words, classification is performed based on feature measurements taken from an image, which could be spectral, texture, spatial, shape, or geometric properties, and/or some statistical measures [11]. Image classification can be done using a combination of several features, and selecting the proper combination of features is a major challenge in classification [12]. It is worth noting that most conventional image classification methods have no particular strength in classifying images with different spatial and spectral features [13].
Recently, there has been great interest in heuristic and meta-heuristic algorithms for image classification and related fields [14]. The majority of heuristic and metaheuristic algorithms are inspired by natural or biological phenomena [11]. A comprehensive review of heuristic and metaheuristic algorithms can be found in [11,15,16]. A Genetic Algorithm was applied to image classification in [17]. In [18], Ant Colony Optimization was presented as a method for clustering analysis. In [19], a genetic k-means algorithm was proposed for the clustering problem. Clustering approaches based on a neural gas algorithm were presented in [20]. Particle Swarm Optimization was proposed for image classification in [21]. Recently, convolutional neural networks (CNN) and deep CNNs have attracted researchers in the field of image classification [22,23,24,25,26,27,28].
In the present study, a novel image classification algorithm based on natural gravity, inspired by Newton's theory of physics, is presented, called the "homogeneity distance classification algorithm (HDCA)". The proposed HDCA uses texture and spectral information for classifying images. Some distinct features of the proposed HDCA include:
(i) Some HDCA operators are combined with stochastic features; therefore, the algorithm can consider more similar pixels for assignment to a class;
(ii) A new and unique concept of distance is used in the classification process in order to achieve high accuracy in the classification of different images;
(iii) Optimal features for image classification are selected automatically based on spectral and texture information; and
(iv) More homogeneous classes can be separated by determining the optimal scale of the feature space with the new optimization method.
The remainder of the paper is organized as follows. Section 2 reviews relevant works, gives a brief introduction to the law of gravity, introduces the proposed HDCA, and describes the optimization of the feature space scale. Experimental results are presented in Section 3. The discussion is reported in Section 4, followed by conclusions in Section 5.

2. Data and Method

2.1. Data

To evaluate and validate the performance of the HDCA algorithm, two categories of satellite imagery (multispectral and hyperspectral images) were used. An IKONOS satellite image of the Shahriar region (Tehran), acquired in 2013, with four spectral bands (Red, Green, Blue, and Near Infrared), a spatial resolution of 4 m, and a size of 400 × 400 pixels was used. Also, hyperspectral reference datasets, including the Indian Pines, Salinas, and Salinas-A datasets acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor, were used. The Indian Pines hyperspectral dataset covers a 2.9 × 2.9 km² (145 × 145 pixels) area of Northwest Tippecanoe County in the Indian Pine region and was collected on 12 June 1992. The Salinas dataset contains 224 spectral bands with a spatial resolution of 3.7 m and was acquired by the AVIRIS sensor over Salinas Valley, California, USA; the size of the image is 512 × 217 pixels. The Salinas-A dataset is a subset of the Salinas dataset that includes 224 spectral bands; the size of the image is 86 × 83 pixels.
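For reproducibility, these reference scenes are commonly distributed as MATLAB files; the following minimal sketch loads one of them, but the file and variable names shown are assumptions that depend on the download source.

```python
import numpy as np
from scipy.io import loadmat

def load_scene(image_path, gt_path, image_key, gt_key):
    """Return a (rows, cols, bands) cube and its (rows, cols) ground truth."""
    cube = loadmat(image_path)[image_key].astype(np.float64)
    gt = loadmat(gt_path)[gt_key]
    return cube, gt

# Hypothetical file and variable names for the Indian Pines scene.
cube, gt = load_scene("Indian_pines_corrected.mat", "Indian_pines_gt.mat",
                      "indian_pines_corrected", "indian_pines_gt")
print(cube.shape, gt.shape)  # expected: (145, 145, 200) and (145, 145)
```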

2.2. Background

According to Newton's law of universal gravitation, all objects attract each other with a force of gravitational attraction [29,30]. The gravitational force between two objects acts along the line joining the two particles, is directly proportional to the product of the masses of the two objects, and is inversely proportional to the square of the distance between them [30]. Newton's conclusion about the magnitude of these forces is expressed as Equation (1).
$\mathrm{Force}_{ij} = G \dfrac{M_i M_j}{R_{ij}^2}$ (1)
where $\mathrm{Force}_{ij}$ is the magnitude of the gravitational force between two objects i and j, $M_i$ and $M_j$ are the masses of objects i and j, respectively, G is the universal gravitational constant, and $R_{ij}$ is the distance separating the centers of objects i and j.
Inspired by the Newtonian law of gravity, a gravitational clustering algorithm was first suggested by Wright (1977) [31] and has been studied substantially in [11,32,33,34,35,36,37]. One of these studies, to which our proposed method is similar, is reviewed in this section. In Rashedi and Nezamabadi-pour (2013) [11], a model of gravitational clustering in RGB space was used for color image segmentation. In the segmentation process, image pixels were mapped to a feature space combining RGB color, context, and spatial information of the pixel, and each pixel was considered a particle with a mass of one. All particles exert gravitational force on each other. The gravitational attraction model between two particles i and j was defined as (Equation (2)):
$\mathrm{Force}_{ij} = G \dfrac{M_i M_j}{R_{ij} + \varepsilon} (z_j - z_i)$ (2)
where $z_i$ and $z_j$ are the feature vectors of particles i and j, respectively. This force causes particles (pixels) to move in RGB space. Particles moving to one place are merged into a new particle, whose mass is the sum of the masses of the two particles. This method is an unsupervised segmentation algorithm for color images based on the gravitational law. It is an iterative method containing three operators, called traveling, merging, and escaping, which are executed until a certain number of iterations is reached. Our proposed algorithm has some theoretical similarities to this algorithm, but there are major differences in the calculations and operator behavior, which are discussed below.

2.3. The Proposed Algorithm: HDCA

HDCA is a supervised classification method for remote sensing images based on the law of gravity. The proposed algorithm is an iterative method containing three operators, called traveling, merging, and escaping. The implementation of these operators is repeated in two supplementary computing stages until the stopping criterion of each stage is met. In the traveling operator, agents (pixels) move in the feature space under the influence of the gravitational force of the training pixels; by traveling, they can find other similar agents. In the merging operator, agents that are nearest to each other are merged, based on the rule that exactly one of the two agents contains training data; in other words, the merging operator merges unlabeled pixels with agents that include training data. Finally, in the escaping operator, unlabeled pixels escape from their clusters with a probability proportional to their distance from their cluster center, and the escaped pixels are absorbed by the nearest clusters. Note that the algorithm does not cycle through traveling-merging-escaping consecutively; rather, the traveling and merging operators repeat consecutively until the first stopping criterion is met, and then the escaping operator repeats until the second stopping criterion is met. The overall flowchart of the proposed method is shown in Figure 1.

2.3.1. The Procedure of Image Mapping into Feature Space

First, a satellite image is mapped into a feature space in which the masses move to find similar pixels. As mentioned before, in order to acquire an accurate classification, the use of different features, in addition to spectral information, is essential. In the proposed HDCA method, we define a feature space containing three features (dimensions) per spectral band. The first dimension indicates the spectral information in each band of the original image. The next two dimensions are the variance and inertia of pixels located in a small region (window) around a pixel in each band of the original image.
In other words, each agent i (pixel) in an image with L bands is described as follows:
$Z_i = \left( S_i^d,\ \mathrm{Var}_i^d,\ \mathrm{Inr}_i^d \ \text{for}\ d = 1, \dots, L \right) \ \text{for}\ i = 1, \dots, N$ (3)
where N is the total number of pixels in the image, $S_i^d$ represents the spectral value of agent i (pixel) in band d, and $\mathrm{Var}_i^d$ and $\mathrm{Inr}_i^d$ represent the variance and inertia of agent i in band d, respectively. Each pixel is first considered as a particle or an agent. Through the traveling operator, particles move under the gravitational force of the particles that contain training data. The number of agents is decreased after each iteration by the merging operator.
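The following minimal sketch shows one way to build this feature space with NumPy/SciPy, assuming a (rows, cols, bands) image cube; since the exact co-occurrence computation is only specified later (Section 3.1), the inertia (GLCM contrast) is approximated here by the windowed mean of squared differences of horizontally adjacent pixels.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def feature_space(cube, win=3):
    """Map a (rows, cols, bands) cube to the (N, 3*bands) space of Equation (3)."""
    rows, cols, bands = cube.shape
    feats = []
    for d in range(bands):
        band = cube[:, :, d].astype(np.float64)
        mean = uniform_filter(band, win)
        var = uniform_filter(band ** 2, win) - mean ** 2     # local variance
        diff2 = np.zeros_like(band)
        diff2[:, :-1] = (band[:, 1:] - band[:, :-1]) ** 2    # neighbor contrast
        inertia = uniform_filter(diff2, win)                 # windowed average
        feats += [band, var, inertia]
    Z = np.stack(feats, axis=-1).reshape(rows * cols, 3 * bands)
    # Normalize every feature to [0, 1], as the WMD of Equation (9) requires.
    return (Z - Z.min(axis=0)) / (np.ptp(Z, axis=0) + 1e-12)
```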

2.3.2. Traveling Operator

At the traveling stage, particles move based on the gravitational force and search the feature space to find similar agents. This step is performed based on the laws of gravitation and motion. In order to calculate an agent's acceleration, the sum of the forces that other particles apply on the agent is calculated using Equation (4), and the agent's acceleration is then calculated using the law of motion (Equation (5)). Theoretically, three types of masses can be defined for each object: (1) Active gravitational mass ($M_a$) is a measure of the power of the gravitational field related to a particular object. (2) Passive gravitational mass ($M_p$) is a measure of the strength of an object's interaction with the gravitational field; an object with a smaller passive gravitational mass experiences a smaller force than an object with a larger passive gravitational mass in the same gravitational field. (3) Inertial mass ($M_i$) is a mass parameter giving the inertial resistance to acceleration of the object when a force is applied; an object with a large inertial mass changes its motion slowly, and one with a small inertial mass changes it rapidly [37].
It is noted that active gravitational mass is considered only for training pixels in this method; particles without labels have only passive gravitational mass. As a result, unlabeled particles apply no force to other particles; only training pixels can attract other particles. The active gravitational mass of training pixels is calculated through Equation (6). According to Equation (6), the similarity between agents' features, rather than the number of training data in the different classes, is the only criterion for traveling and the subsequent merging; therefore, differences in the number of training data across classes do not affect the performance of the algorithm. Finally, the velocity of each dimension of an agent is calculated by adding the current acceleration to a fraction of the previous velocity (Equation (7)). One of the characteristics of heuristic search algorithms is stochastic search. By adding stochastic search, the proposed algorithm can, to some extent, overcome the weakness of inadequate training data and search the image to find the best areas for proper merging. This stochasticity also helps the algorithm avoid local minima and explore the space more efficiently. In order to keep this random characteristic, a random fraction is used in Equation (7). Finally, the agent position is updated using Equation (8).
$\mathrm{force}_i^d(t) = \sum_{j \in k\mathrm{close}_i} G(t) \dfrac{M_{aj}(t)\, M_{pi}(t)}{\left(1 + R_{ij}(t)\right)^2} \left( z_j^d(t) - z_i^d(t) \right)$ (4)
$a_i^d(t) = \dfrac{\mathrm{force}_i^d(t)}{M_{ii}(t)} = \sum_{j \in k\mathrm{close}_i} G(t) \dfrac{M_{aj}(t)}{\left(1 + R_{ij}(t)\right)^2} \left( z_j^d(t) - z_i^d(t) \right)$ (5)
$M_j(t) = \dfrac{1}{n_{cj}}$ (6)
$v_i^d(t+1) = \mathrm{rand}_i \cdot v_i^d(t) + a_i^d(t)$ (7)
$z_i^d(t+1) = z_i^d(t) + v_i^d(t+1)$ (8)
where $M_{aj}$ is the active gravitational mass of agent j, $M_{pi}$ is the passive gravitational mass of agent i, $M_{ii}$ is the inertial mass of the ith agent, and $R_{ij}(t)$ is a unique variant of the Manhattan distance between two agents i and j in the feature space, called the Weighted Manhattan Distance (WMD) (Appendix A shows why the WMD is more appropriate than other distance metrics). However, it is important to note that the feature space should first be normalized (transferred to the range between 0 and 1); the distance between agents is then calculated by Equation (9). $k\mathrm{close}_i$ is the set of the k closest agents containing training data relative to agent i, which apply forces on the ith agent and pull it. $n_{cj}$ is the total number of training pixels assigned to the class related to agent j. The use of this unique concept of distance is an additional distinctive property of the proposed algorithm. In this paper, the gravitational constant G is assigned a constant value during iterations.
$R_{ij}(t) = \sum_{d=1}^{L} \left( \dfrac{\mu_1^d \left| S_j^d - S_i^d \right|}{\delta S_{cj}^d} + \dfrac{\mu_2^d \left| \mathrm{Var}_j^d - \mathrm{Var}_i^d \right|}{\delta \mathrm{Var}_{cj}^d} + \dfrac{\mu_3^d \left| \mathrm{Inr}_j^d - \mathrm{Inr}_i^d \right|}{\delta \mathrm{Inr}_{cj}^d} \right)$ (9)
where $S_j^d$ is the spectral value of pixel j in band d, and $\mathrm{Var}_j^d$ and $\mathrm{Inr}_j^d$ are the variance and inertia values of pixels located in a small window surrounding pixel j in band d, respectively. $\delta S_{cj}^d$ is the standard deviation of the spectral values of all training pixels assigned to the class related to agent j in band d, and $\delta \mathrm{Var}_{cj}^d$ and $\delta \mathrm{Inr}_{cj}^d$ are the standard deviations of the variance and inertia values in a small window around all training data assigned to the class related to agent j in band d, respectively. The coefficients $\mu_1^d$, $\mu_2^d$, and $\mu_3^d$ are the weight and scale of each feature, and their best values are calculated with a new optimization method (Section 2.4).
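A minimal sketch of Equation (9), assuming the feature vectors are ordered [spectral, variance, inertia] per band, the standard deviations come from the training pixels of agent j's class, and the small constant added to the denominator is only a numerical guard:

```python
import numpy as np

def wmd(z_i, z_j, sigma_j, mu):
    """Equation (9): z_i, z_j are (3L,) feature vectors; sigma_j holds the
    per-feature standard deviations of the training pixels of agent j's class;
    mu holds the scale coefficients (mu1, mu2, mu3 per band) found by IGSA."""
    return float(np.sum(mu * np.abs(z_j - z_i) / (sigma_j + 1e-12)))
```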

2.3.3. Merging Operator

In the merging operator, unlabeled agents (non-training agents) are merged with agents that include training data. At this stage, if an unlabeled agent and an agent with training pixels are the closest agents to each other, they are merged to create a new labeled agent. However, two agents cannot be merged if both contain training data or if neither contains training data. Since unlabeled agents do not have mass, the location, mass, velocity, and label of the new agent are taken from the agent with training data.
During the merging operator, the number of agents decreases as iterations increase. The stopping criterion controlling the iterative traveling-merging cycle is the number of agents: when the number of agents equals the total number of training data, the iterative cycle stops, and after the final merging, the escaping stage begins. In the final merging, all agents with the same training label are merged into a single agent; as a result, the number of agents equals the number of classes.
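A minimal sketch of the mutual-nearest merging rule, under the assumption that a full matrix D of pairwise WMD values between the current agents is available:

```python
import numpy as np

def merge_pairs(D, labeled_idx, unlabeled_idx):
    """Return (unlabeled, labeled) index pairs that are each other's nearest
    agents; the unlabeled agent is absorbed and the labeled one keeps its
    position, mass, velocity, and label."""
    pairs = []
    for u in unlabeled_idx:
        l = labeled_idx[np.argmin(D[u, labeled_idx])]           # nearest labeled
        if unlabeled_idx[np.argmin(D[l, unlabeled_idx])] == u:  # mutually nearest
            pairs.append((u, l))
    return pairs
```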

2.3.4. Escaping Operator

One of the problems of image clustering and classification based on region-growing algorithms is the creation of incorrect sequential clusters due to incorrect merging [13]. To address this problem, randomness was introduced into the agents' velocities during the traveling process, but classification accuracy is still influenced by incorrect merging. To mitigate this problem effectively, the escaping operator is introduced in HDCA.
This stage starts after the end of the merging-traveling cycles. Before running this stage, pixels are returned from their moved (traveled) feature values to their initial feature values, and then the center location of every cluster is determined. All training and unlabeled pixels have equal weight in computing the new center locations of the clusters. At the escaping stage, all calculations are done on the basis of the initial features.
Each unlabeled pixel escapes from its corresponding cluster with a probability $P_{e_{ik}}$ calculated from its distance to its cluster center; training pixels cannot escape from their clusters. The escaped pixels are absorbed by the closest cluster in the feature space. The escape probability of pixel i becoming free from cluster k is calculated by Equation (10), where $r_{ik}$ is the distance between pixel i and the center of cluster k (agent k), and $d_{\min}^k$ and $d_{\max}^k$ are the distances of the nearest and farthest pixels to the cluster center, respectively. Pixels close to the cluster center are released with lower probability, while pixels far from the cluster center escape with higher probability. It is noteworthy that the location of the cluster center is calculated twice during each iteration of the escaping stage: before and after absorbing the escaped pixels.
$P_{e_{ik}} = \left( \dfrac{r_{ik} - d_{\min}^k}{d_{\max}^k - d_{\min}^k} \right)^{1/p}$ (10)
where p is the escape power of the pixels; higher p values lead to higher escape probability. The distance used at the escaping stage is the WMD, with standard deviations calculated from the training data values of the cluster members. It should be noted that a pixel at distance $d_{\min}^k$ from the center never escapes, whereas a pixel at distance $d_{\max}^k$ is always released. One advantage of the designed escaping operator, in addition to its random character, is that the user does not need to set a threshold for pixels to become free. The stopping criterion of the escaping operator can be one of the conditions below.
(1) Stopping after a certain number of iterations; and
(2) Stopping when classes become fixed: the operator stops when no change is observed in the classes.
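A minimal sketch of the escape test of Equation (10), assuming r, d_min, and d_max are the WMDs of the pixel, the nearest member, and the farthest member from the cluster center:

```python
import numpy as np

def escapes(r, d_min, d_max, p=3.0, rng=np.random.default_rng()):
    """Equation (10): returns True when the pixel escapes its cluster."""
    prob = ((r - d_min) / (d_max - d_min + 1e-12)) ** (1.0 / p)
    return rng.random() < prob  # a pixel at d_min never escapes, at d_max always
```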

2.4. Optimization of Feature Space Scale

Optimization of the feature space is an important pre-processing stage for image classification. In the gravitational classification, the coefficients $\mu_1^d$, $\mu_2^d$, and $\mu_3^d$ determine the weight and scale of each feature and are used to compute the WMD in the feature space. For example, if $\mu_1^d$ increases, the weight of the spectral data of band d increases, giving it more effect on the classification. The values of these coefficients have different effects on achieving homogeneous classes in different images, so their optimum values should be determined automatically for each image. It is crucial that the coefficients can be determined automatically so as to distinguish between classes with the greatest similarity of elements within classes and the least similarity between classes. In this paper, the Improved Gravitational Search Algorithm (IGSA) is used to optimize the coefficients, together with a particular feature management function, in order to achieve accurate image classification. This method can calculate the coefficient values $\mu_1^d$, $\mu_2^d$, and $\mu_3^d$ by considering the training data in order to obtain the greatest separability of classes.

2.4.1. Background

The IGSA algorithm, like the GSA algorithm [37], is inspired by the law of gravity. According to the law of gravity, every object is able to sense the location and situation of other objects through gravitational attraction. The optimum region attracts the objects like a black hole; thus, this force can be utilized as an instrument to exchange information. The designed optimizer can be used to solve any optimization problem in which each solution is defined as a position in space and its similarity to other solutions is expressed as a distance. The masses are determined by the objective function. The major reasons for applying the IGSA algorithm instead of other algorithms are listed as follows.
• The efficiency of most optimization algorithms is determined by the initial positions of the particles; if the initial population does not cover some parts of the space, finding the optimum region will be difficult. Our proposed algorithm (IGSA) is able to overcome this problem with negative mass.
• The time to reach the optimum solution is short.
• Another benefit of our proposed algorithm is its use of a kind of memory, which helps find the optimum solution properly.
• The memory of the algorithm and the use of negative mass dramatically decrease the possibility of the algorithm being trapped in a local optimum, so convergence to the global optimum can be achieved within a small number of iterations.

2.4.2. The Proposed Algorithm (IGSA)

Consider a system with N agents whose performance is based on their masses. Each agent has its own coefficients $\mu_1^d$, $\mu_2^d$, and $\mu_3^d$ (for d = 1, 2, …, M, where M is the number of bands), which form a point in the space, i.e., a solution of the problem. The position of the ith agent in the kth dimension is denoted by $x_i^k$ (Equation (11)).
$X_i = \left( x_{1i}^d,\ x_{2i}^d,\ x_{3i}^d \ \text{for}\ d = 1, \dots, M \right) \ \text{for}\ i = 1, \dots, N$ (11)
where N is the total number of agents. In this system, the law of gravity is based not only on attraction but also on repulsion, which results from negative mass. In IGSA, in addition to the positive force (attraction) that the top members of the community (the set of agents with greater mass) exert on each agent in each dimension, a negative force (repulsion) is also exerted on each agent in each dimension by the poorer members of the community (the set of agents with smaller mass). In order to give the algorithm memory, positive and negative forces are also applied toward the best and worst positions visited by each agent that receives the force. Agents with positive or negative mass apply their force to the other agents, which leads to a general motion toward objects with the best solutions. In other words, during iterations, it is expected that masses are attracted by the heaviest mass on the positive side, which represents an optimum solution in the search space.
In this system, at a specific time t, the force acting on agent i from agent j in the dth dimension is $f_{ij}^d(t)$. The magnitude of this force can be calculated by Equation (12), where $M_{aj}$ and $M_{pi}$ are the active gravitational mass of particle j and the passive gravitational mass of particle i, respectively, G(t) is the gravitational constant at time t, $\varepsilon$ is a small constant, and $R_{ij}$ is the Euclidean distance (2-norm distance) between two agents i and j, which is calculated according to Equation (13).
$f_{ij}^d(t) = G(t) \dfrac{M_{pi}(t) \cdot M_{aj}(t)}{R_{ij}(t) + \varepsilon} \left( x_j^d(t) - x_i^d(t) \right)$ (12)
$R_{ij}(t) = \left\lVert X_i(t),\ X_j(t) \right\rVert_2$ (13)
According to Equation (14), the total force acting on agent i in dimension d at time t, $f_i^d(t)$, is equal to the randomly weighted sum of the positive forces from the better agents (kbest), the randomly weighted sum of the negative forces from the worst agents (hworst), and the positive and negative forces from the best and worst positions visited by the agent that receives the force. The forces exerted on each agent in each dimension by its worst (pworst) and best (pbest) positions are calculated using Equations (15) and (16), respectively.
$f_i^d(t) = \sum_{j \in kbest,\ j \neq i} \mathrm{rand}_j\, f_{ij}^d(t) + \sum_{h \in hworst,\ h \neq i} \mathrm{rand}_h\, f_{ih}^d(t) + \mathrm{rand}_{pb}\, f_{i\,pb}^d(t) + \mathrm{rand}_{pw}\, f_{i\,pw}^d(t)$ (14)
$f_{i\,pw}^d(t) = G(t) \dfrac{M_{pi}(t) \cdot M_{a\,pworst}(t)}{R_{ij}(t) + \varepsilon} \left( x_{pworst}^d(t) - x_i^d(t) \right)$ (15)
$f_{i\,pb}^d(t) = G(t) \dfrac{M_{pi}(t) \cdot M_{a\,pbest}(t)}{R_{ij}(t) + \varepsilon} \left( x_{pbest}^d(t) - x_i^d(t) \right)$ (16)
In these equations, rand is a uniformly distributed random number in the interval [0, 1], $M_{a\,pworst}$ and $M_{a\,pbest}$ are the active gravitational masses of the worst and best positions of each agent, respectively, and $M_{pi}$ is the passive gravitational mass of agent i. The values of k and h vary over time in order to control the compromise between exploration and exploitation. The algorithm needs to perform a proper search and exploration in the first iterations; but, as time passes, the population achieves better results and the problem requires exploitation. Therefore, at the beginning, k covers almost all agents (95% of the best community), and, as time passes, the number of attractors decreases linearly until, at the end, only one agent attracts the other agents of the population.
The h value follows a different schedule: at the beginning, only 5 percent of the worst population members repel the others; the number of repellents then increases linearly until it reaches 30 percent of the worst population members at 25 percent of the iterations; after that, h decreases linearly until no worst members remain at 50 percent of the iterations. Also, the total forces from the best (pbest) and worst (pworst) positions are applied to each agent only until 75 percent of the iterations; after that, they are no longer applied. Implementing this procedure controls exploration and exploitation ideally; a sketch of these schedules is given below.
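A minimal sketch of these exploration/exploitation schedules, assuming linear interpolation between the breakpoints stated above (t is the current iteration, T the total number of iterations, and n the population size):

```python
import numpy as np

def kbest_size(t, T, n):
    # 95% of the population at t = 0, shrinking linearly to a single agent.
    return max(1, int(round(np.interp(t, [0, T], [0.95 * n, 1]))))

def hworst_size(t, T, n):
    # 5% at t = 0, rising to 30% at T/4, back to 0 at T/2, none afterwards.
    frac = np.interp(t, [0, 0.25 * T, 0.5 * T, T], [0.05, 0.30, 0.0, 0.0])
    return int(round(frac * n))

def use_pbest_pworst(t, T):
    # pbest/pworst forces act only during the first 75% of the iterations.
    return t <= 0.75 * T
```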
According to Newton's second law, each agent in dimension d has an acceleration that is proportional to the force applied to it in dimension d and inversely proportional to its inertial mass, as expressed in Equation (17). The acceleration of agent i in dimension d at time t and the inertial mass of agent i are denoted $a_i^d(t)$ and $M_{ii}$, respectively [37,38].
$a_i^d(t) = \dfrac{f_i^d(t)}{M_{ii}(t)}$ (17)
In the IGSA algorithm, the inertial mass, passive gravitational mass, and active gravitational mass of every agent are considered equal. These values are calculated by the objective (fitness) function, with separate equations for the masses of the kbest, hworst, pbest, and pworst agents (Equations (18)–(21)). In these equations, a greater mass is assigned to objects with better fitness, which leads to greater effectiveness. Thus, agents with better fitness have more mass in the positive direction and can therefore attract the other agents, and vice versa.
$M_{ikb}(t) = \dfrac{\mathrm{fit}_i(t) - \mathrm{worst\_so\_far}}{\sum_{j \in kbest} \left( \mathrm{fit}_j(t) - \mathrm{worst\_so\_far} \right)}$ (18)
$M_{ihw}(t) = h_k \dfrac{\mathrm{fit}_i(t) - \mathrm{best\_so\_far}}{\sum_{j \in hworst} \left( \mathrm{best\_so\_far} - \mathrm{fit}_j(t) \right)}$ (19)
$M_{ipb}(t) = \dfrac{\mathrm{fitbest}_i - \mathrm{worst\_so\_far}}{\sum_{j=1}^{N} \left( \mathrm{fitbest}_j - \mathrm{worst\_so\_far} \right)}$ (20)
$M_{ipw}(t) = \dfrac{\mathrm{fitworst}_i - \mathrm{best\_so\_far}}{\sum_{j=1}^{N} \left( \mathrm{best\_so\_far} - \mathrm{fitworst}_j \right)}$ (21)
where the best-so-far and worst-so-far values are the best and worst solutions of the whole population so far, respectively; the $\mathrm{fitbest}_i$ and $\mathrm{fitworst}_i$ values are the best and worst solutions of agent i, respectively; and $\mathrm{fit}_i(t)$ represents the fitness value of agent i at time t. According to the equations, the mass values of pworst and hworst ($M_{ipw}$ and $M_{ihw}$) are negative, which leads to repulsion.
Furthermore, the next velocity of an agent is considered as a fraction of its current velocity added to its acceleration. Therefore, the position and velocity of agent i in dimension d could be calculated as Equations (22) and (23) [37,38].
$v_i^d(t+1) = \mathrm{rand}_i \cdot v_i^d(t) + a_i^d(t)$ (22)
$x_i^d(t+1) = x_i^d(t) + v_i^d(t+1)$ (23)
where $\mathrm{rand}_i$ and $\mathrm{rand}_j$ are uniform random variables in the interval [0, 1] that give a randomized characteristic to the search.
According to Equation (24), the gravitational constant, G, is a function of the initial value ( G 0 ) and time (t). In the GSA algorithm, an exponential equation is used to reduce the gravitational constant (Equation (24)) [37].
$G(t) = G_0\, e^{-\alpha t / T}$ (24)
where $G_0$ is the initial gravitational constant, $\alpha$ is a positive constant, and T is the total number of iterations of the algorithm (the total age of the system). At the beginning of system formation, each object (agent) is placed randomly at a point of the space that represents a solution to the problem. At each iteration, the agents are evaluated and the values of G, pbest, pworst, best-so-far, and worst-so-far are updated. Then, the gravitational masses, the acting gravitational forces, the acceleration, and the velocity of each agent are calculated. Finally, the next position of each agent is computed and the objects are placed in their new positions. Figure 2 shows the optimization algorithm.
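A minimal sketch of the decaying gravitational constant of Equation (24) together with the stochastic velocity/position update of Equations (22) and (23); G0 and alpha are free parameters of the optimizer, and the values shown are only placeholders:

```python
import numpy as np

def gravitational_constant(t, T, G0=100.0, alpha=20.0):
    """Equation (24): exponentially decaying gravitational constant."""
    return G0 * np.exp(-alpha * t / T)

def move(x, v, a, rng=np.random.default_rng()):
    """Equations (22)-(23): stochastic velocity update, then position update."""
    v_next = rng.random(v.shape) * v + a  # keep only a random fraction of v
    return x + v_next, v_next
```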

2.4.3. The Objective Function (Fitness)

The objective function for optimization in IGSA is defined by Equation (25), which is used to determine the scale of the feature space, where $R_{iM_i}$ is the distance of the ith training sample from its own cluster center and $R_{iM_{\mathrm{nearest}}}$ is the distance of the ith training sample from the nearest neighboring cluster center. Both distances are WMDs.
$F_{\mathrm{obj}} = \sum_{i=1}^{T} \dfrac{R_{iM_i}}{R_{iM_{\mathrm{nearest}}}}$ (25)
where T is the total number of selected training samples. The purpose of this function is to search the space of all possible states of the coefficients $\mu_1^d$, $\mu_2^d$, and $\mu_3^d$ in order to optimize the objective function. This objective function balances the feature space such that maximum homogeneity is established among the members of a class and maximum heterogeneity among the members of different classes. Effective feature vectors and their scales can then be obtained, which helps achieve high classification accuracy.
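A minimal sketch of this fitness, assuming the wmd function sketched in Section 2.3.2 and per-class centers and standard deviation vectors computed from the training data; since smaller ratios mean compact classes that lie far from their neighbors, the sum is presumably minimized by IGSA:

```python
def fitness(samples, labels, centers, sigmas, mu, wmd):
    """Equation (25): sum over training samples of the WMD to the own center
    divided by the WMD to the nearest other center (smaller is better)."""
    total = 0.0
    for z, c in zip(samples, labels):
        d_own = wmd(z, centers[c], sigmas[c], mu)
        d_near = min(wmd(z, centers[k], sigmas[k], mu)
                     for k in range(len(centers)) if k != c)
        total += d_own / (d_near + 1e-12)
    return total
```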

2.5. Comparison with Other Methods

The accuracy assessment of the HDCA for the Indian Pines, Salinas, and Salinas-A datasets is compared with other traditional and deep learning-based classification methods. For a uniform comparison, the same training dataset was used in the training phase and the same test dataset was used in the test phase for all methods. The k-NN and SVM with random feature selection [39] were selected as traditional classifiers. For each dataset, k-NN was employed with k equal to the number of classes. SVM with random feature selection was applied according to Waske et al. [39] with an RBF kernel; the RBF kernel parameters (i.e., C and γ) were set by cross-validation. The multilayer perceptron (MLP), the CNN designed by Hu et al. [40], and the CNN with pixel-pair features (PPFs) designed by Li et al. [41] were selected as deep learning-based classifiers. The MLP was used with a base learning rate of 0.0005 and a batch size of 200. In the CNN, the algorithm parameters were set equal to the values noted in [40]. PPF was applied with the same settings described by Li et al. [41].
Finally, the results of the hyperspectral reference dataset classification using the HDCA algorithm were compared with the results of other algorithms in previous studies [13,42,43,44,45,46,47,48,49,50,51,52].

3. Experimental Results

3.1. Multispectral Image

In order to evaluate the accuracy of HDCA compared to the MLC (Maximum Likelihood classifier) and SVM (Support Vector Machine), an IKONOS satellite image was used. MLC and SVM are among the most common methods of satellite image classification [53,54]. The MLC is one of the most powerful parametric statistical methods, while the SVM is a non-parametric method that has been applied successfully to image classification in recent years [55,56,57,58,59,60,61]. The SVM classifier was employed using a radial basis function (RBF) kernel; for the classification of the IKONOS satellite image, the RBF kernel parameters (i.e., C and γ) were set to 250 and 3, respectively. The IKONOS satellite image of the Shahriar region (Tehran), acquired in 2013, with four spectral bands (Red, Green, Blue, and Near Infrared), a spatial resolution of 4 m, and a size of 400 × 400 pixels, was used. The color composite of the original image is shown in Figure 3. The image was classified into 5 classes: bare land, building, road, tree, and farmland. The bare land class includes lands composed of rock, sand, and soil surfaces. The building class includes residential buildings, administrative buildings, and small industrial and commercial buildings. The road class includes main streets, subways, and alleys. The farmland class includes annual and perennial crops and grassland found in open flat areas. The tree class includes the trees in the study area. Thereafter, classification accuracy was examined against ground truth data; for this purpose, a set of training (<100 pixels) and testing (>400 pixels) samples was collected for each land use class.
In HDCA, the gravitational constant (G) for the traveling operator was kept equal to 10 throughout the iterations. The maximum number of iterations for the escaping operator was set to 100, and the number of iterations for optimization to 200. The escape power p of the pixels, which determines the escape probability, was set to 3. It should be noted that distance vectors in four main directions with the length of one pixel ((1,1), (0,1), (1,0), (−1,1)) were considered for the co-occurrence matrix [62]. The inertia feature was produced for each direction, and their average served as the final inertia feature. The size of both kernels was set to 3 × 3. To reduce the computational load, features with weights near zero were omitted from the classification.
Figure 4 shows the image classification results by the three methods. Table 1 indicates Producer’s accuracy, User’s accuracy, Overall accuracy and Cohen’s Kappa coefficient for the classification results.
The results reported in Table 1 indicate that the classification accuracy of each method for the road class is nearly identical. All methods also classified the tree and farmland classes with high accuracy, with HDCA slightly better than the other two methods. On the other hand, a significant difference was observed in the accuracy of the building class: HDCA is approximately 2.4 and 0.69 percentage points more accurate than MLC and SVM, respectively. The HDCA yielded much higher accuracy because of the application of the WMD and the proposed IGSA optimization algorithm in the algorithm's operators. Comparison of the bare land class accuracies obtained from the classifiers revealed the weakness of MLC in distinguishing the pixels that should be included in the bare land class.
Based on the results obtained, the HDCA algorithm performed better than the two other methods in this comparison due to its high performance in all classes. According to the overall accuracy and Cohen's Kappa coefficient reported in Table 1, our proposed method has a clear advantage over the MLC and SVM methods on this image.
As shown in Figure 3, the building and bare land classes have high spectral similarity, and the accuracy of classification of these classes was not high with MLC and SVM. These classes are texturally distinguishable, such that it is feasible to separate them by visual interpretation. Statistical methods such as MLC cannot effectively use multiple sources of features with different scales and different statistical distributions because of their low flexibility (especially in the building class); MLC also depends on a Gaussian statistical distribution of the data. SVM did not show high efficiency in extracting the building class either, because the separating hyperplane between classes, which is supposed to lie midway between the support vectors, did not account for the dispersion of training data within classes in this case. The classification accuracy of HDCA was slightly higher than the two other methods in the bare land class, but substantially higher in the building class. This difference in accuracy is attributable to the impact of the Weighted Manhattan Distance (WMD) and the IGSA optimization in the algorithm's operators. The differences between the HDCA and SVM classifications are illustrated in Figure 5.
As illustrated in Figure 5, most of the differences between the two classifiers are related to pixels suspected to belong to the building class. This indicates the different way HDCA handles the extracted texture features of the image compared with SVM. Visual assessment of the HDCA results demonstrates the efficiency of this method in building extraction.
The appropriate integration of the road network and relatively high accuracy of road extraction can be presumed by visual interpretation of the obtained image from the proposed method. It is also noteworthy that two types of vegetation cover were separated from each other with acceptable accuracy. In sum, the HDCA advantages in high spatial resolution image classification (i.e., IKONOS) are listed as follows.
• The algorithm preserved the road network well.
  • The different vegetation covers were separated from each other effectively.
  • It distinguished between the two classes of bare land and building properly.

3.2. Hyperspectral Images

To evaluate and validate the performance of the HDCA algorithm in hyperspectral image classification, hyperspectral reference datasets, including the Indian Pines, Salinas, and Salinas-A datasets acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor, were used. To investigate the influence of the size of the training set on the classification methods, the classification of each reference dataset was performed with different training sample sizes (1, 5, 10, and 12.5%), and four different schemes were considered: in scheme (a), 1% of the training samples of each class were used; in schemes (b), (c), and (d), 5%, 10%, and 12.5% of the training samples of each class, respectively, were used to classify the reference datasets. HDCA samples were randomly selected as training pixels, and the remaining pixels were used as the test set.

3.2.1. Indian Pines Dataset

The Indian Pines hyperspectral dataset covers a 2.9 × 2.9 km² (145 × 145 pixels) area of Northwest Tippecanoe County in the Indian Pine region and was collected on 12 June 1992. Two-thirds of this area were covered by mixed agricultural land and one-third by forest or other wildland. This dataset included 220 bands with a spectral resolution of 10 nm and a spatial resolution of 20 m. Twenty spectral bands were omitted because of noise and water absorption, and the remaining bands were used in the experiment. The color composite image of this dataset is illustrated in Figure 6.
The original ground-truth image has 16 different classes. Table 2 indicates the type and the number of labeled pixels for each class. Two challenging issues of this dataset are that all classes contain mixed pixels and that the number of labeled pixels per class is unequal [42,43].
The results of the classification of the Indian Pines dataset using the HDCA method for different training sample sizes are shown in Figure 7.
The classification results for each of the schemes were examined with a set of test data. The user's and producer's accuracies for scheme (c) are shown in Table 3, together with the overall accuracy and Cohen's Kappa coefficient for each of the four schemes.

3.2.2. Salinas Dataset

The Salinas dataset contained 224 spectral bands with a spatial resolution of 3.7 m and was acquired by the AVIRIS sensor over Salinas Valley, California, USA. The size of the image is 512 × 217 pixels. The water absorption bands were removed, and the 204 remaining spectral bands were used in the experiment. The color composite of this image is shown in Figure 8.
This dataset includes 16 classes with highly similar spectral signatures [63] and has been used in many studies as reference data to evaluate classification methods. Table 4 elucidates the type and the number of labeled pixels of each class.
The results of the classification of the Salinas dataset using the HDCA method for different training sample sizes are shown in Figure 9.
The classification results for each of the schemes were examined with a set of test data. The user's and producer's accuracies for scheme (c) are illustrated in Table 5, together with the overall accuracy and Cohen's Kappa coefficient for each of the four schemes.

3.2.3. Salinas-A Scene Dataset

The Salinas-A scene dataset is a subset of the Salinas dataset that includes 224 spectral bands. The size of the image is 86 × 83 pixels. Due to water absorption, 20 spectral bands were removed. The Salinas-A scene is located within the Salinas scene at (samples: 591–676, lines: 158–240) [64]. The color composite of this image is shown in Figure 10.
The Salinas-A scene includes 6 classes, most of which belong to a specific plant species (romaine lettuce) and differ only in growth period. Because of this characteristic, the Salinas-A scene has been used in many studies as reference data to evaluate classification methods. Table 6 demonstrates the type and the number of labeled pixels of each class.
The results of the classification of the Salinas-A scene dataset using the HDCA method for different training sample sizes are shown in Figure 11.
The classification results for each of the schemes were examined with a set of test data. The user's and producer's accuracies for the scheme in which 10% of the training samples of each class were used are shown in Table 7, together with the overall accuracy and Cohen's Kappa coefficient for each of the four schemes.
The accuracy assessment of the HDCA for the Indian Pines, Salinas, and Salinas-A datasets is compared with other traditional and deep learning-based classifiers in Table 8. The overall accuracies of the reference dataset classification for the various algorithms indicate that the HDCA results were better than the results of the other algorithms, including k-NN, SVM, MLP, CNN, and PPF.
To analyze the sensitivity of the results to the number of training samples, the three Indian Pines, Salinas, and Salinas-A datasets were classified with the different classifiers using 1% to 50% (1, 5, 10, 12.5, 15, 20, 25, 30, 35, 40, 45, and 50 percent) of the pixels in each class, randomly selected as training samples; the rest of the pixels were used as test samples.
Figure 12 compares the variation of overall accuracy over increasing the percentage of training based on Indian Pines, Salinas, and Salinas-A datasets for various methods. The samples were randomly selected as training pixels and the remaining pixels were used as the test set.
Figure 12 indicates that, as the number of training samples increases, classification accuracy increases for all algorithms. In all cases, the overall accuracy of the HDCA algorithm was higher than that of the other classifiers. For training samples above 10%, the overall accuracy on all three Indian Pines, Salinas, and Salinas-A datasets is above 97%. As the number of training samples increases, the overall accuracy of the HDCA algorithm is more stable than the overall accuracies obtained by the other classifiers. Generally, the HDCA results were closest to those of the CNN and PPF classifiers: the overall accuracies of HDCA, CNN, and PPF for the Indian Pines dataset were equal when more than 35% of the samples were used for training, and the overall accuracies of HDCA, CNN, and PPF for the Salinas and Salinas-A datasets were equal when more than 20% were used. Among the various classifiers, the overall accuracies of k-NN and MLP were lower than those of the other classifiers; furthermore, the sensitivity of classification accuracy to the number of training samples was higher for these two classifiers than for the others.
Lastly, a comparative assessment was carried out. The proposed algorithm was compared with some recent classification methods: the results of the hyperspectral reference dataset classification using the HDCA algorithm were compared with the results of other algorithms in previous studies [13,42,43,44,45,46,47,48,49,50,51,52]. The overall accuracies of the reference dataset classification for the various algorithms indicate that the HDCA results were better than the results of the other algorithms. The results of the other methods have been derived from the relevant articles; therefore, some results are incomplete. The comparison results are illustrated in Table 9.

4. Discussion

The proposed algorithm has the key goal of optimizing the feature space. Due to the complex nature of the objective function and the huge size of some images as input data, it is vital to use an efficient algorithm to optimize the objective function. The IGSA optimization algorithm, owing to its memory and use of negative mass, is less likely to be trapped in a local optimum than other optimization algorithms, which is the main reason for using it. This algorithm is able to determine the feature space scale, especially in images with high spectral dimension. Also, the intended objective function depicts the separation between classes ideally. Because of the powerful optimization algorithm and the objective function, images of different resolutions can easily be classified without the need to know the nature of the image or its computational complexity.
The other vital issues to be discussed are the training data and the parameters of the HDCA classifier. The proposed algorithm is not strongly affected by the number of training samples and does not require parameters to be determined for classification. This method has a stable model and low dependency on the spatial dimension of the input data because of the normalization of the feature space.
In the present study, the characteristics of spectral component values, variance and inertia were used to implement the HDCA. To classify images with more heterogeneity, more types of features should be used. The results of the study show that HDCA has high capability for classification of images with different spatial and spectral resolutions.
Moreover, the computational time of the proposed method was measured. In this research, a standard notebook (Intel Core i7, 2.40 GHz and 16 GB of RAM) was used for the different datasets (for each dataset, the average computational time over 10 repetitions was computed). The average computational times of the proposed method for the IKONOS image, the Salinas-A dataset, the Salinas dataset, and the Indian Pines dataset are 3.63, 5.72, 11.31, and 7.15 minutes, respectively (using 10% of the available training samples for the hyperspectral datasets). In sum, the proposed method has moderate computational complexity: the complexity of HDCA is lower than that of deep CNN algorithms, and the time required for image classification using HDCA is also lower than that of deep CNN algorithms.

5. Conclusions

In this paper, a novel classification method, HDCA, was proposed and tested with different types of remote sensing images, i.e., high spatial resolution and high spectral resolution. The HDCA uses texture and spectral information for classifying images in two iterative, supplementary computing stages. It was demonstrated to be an effective image classification method as well as an algorithm for optimizing the scale of the feature space at the pre-processing stage; this optimization makes it possible to identify more homogeneous classes. Furthermore, the proposed algorithm, with its heuristic search, has the ability to find the best-suited pixels for a class. The experimental results indicate that HDCA possesses high capability to separate classes in images with different spatial and spectral resolutions. Future work is warranted to test this method with voluminous data and images with more extracted features. The designed optimization algorithm and the method of determining the feature space can be used in other applications, independently of the proposed classification method. Additionally, future studies can investigate the concept of fuzzification with this algorithm in order to achieve considerable classification accuracy in areas with abundant mixed pixels.

Author Contributions

M.K.F., I.D. and A.S. conceived and designed the research for the first draft; M.K.F., I.D. and A.S. performed data analysis and wrote the first draft; S.K.A. edited the pre-draft; Q.W. re-designed the research, revised and edited the paper; all authors contributed to and approved the final manuscript.

Funding

This research received no external funding.

Acknowledgments

We are grateful to three anonymous reviewers and Ali Darvishi Boloorani (Department of Remote Sensing & GIS, University of Tehran) and Hossein Nezamabadi-Pour (Department of Electrical Engineering, Shahid Bahonar University of Kerman) for their valuable comments and suggestions to improve this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Consider two training datasets (A and B) with the cluster centers shown in Figure A1. If we employ any of the other distance metrics (e.g., the Euclidean (2-norm) distance) rather than the WMD, the unlabeled pixel C (Feature 1_C = 0.4 and Feature 2_C = 0.2) will be assigned to cluster B. However, it is clear from the dispersion of the training pixels that the unlabeled pixel should belong to cluster A. For this reason, the WMD accounts for the dispersion of the training pixels through the standard deviation values of these pixels.
Figure A1. Training datasets A and B and unlabeled pixel C.
Consider 30 samples from A and B datasets shown in Figure A1 with the values of feature 1 and 2 shown in Table A1.
Table A1. Values of features 1 and 2 for 30 samples from the A and B datasets.
| Point | Feature 1_A | Feature 2_A | Feature 1_B | Feature 2_B |
|-------|-------------|-------------|-------------|-------------|
| 1 | 0.1 | 0.05 | 0.491 | 0.206 |
| 2 | 0.12 | 0.05 | 0.502 | 0.216 |
| 3 | 0.15 | 0.22 | 0.521 | 0.196 |
| 4 | 0.2 | 0.15 | 0.484 | 0.184 |
| 5 | 0.25 | 0.06 | 0.493 | 0.202 |
| 6 | 0.23 | 0.37 | 0.509 | 0.209 |
| 7 | 0.26 | 0.15 | 0.501 | 0.201 |
| 8 | 0.3 | 0.06 | 0.507 | 0.207 |
| 9 | 0.35 | 0.14 | 0.509 | 0.204 |
| 10 | 0.38 | 0.13 | 0.508 | 0.189 |
| 11 | 0.25 | 0.29 | 0.511 | 0.203 |
| 12 | 0.06 | 0.2 | 0.502 | 0.202 |
| 13 | 0.05 | 0.23 | 0.513 | 0.215 |
| 14 | 0.18 | 0.16 | 0.514 | 0.186 |
| 15 | 0.2 | 0.33 | 0.515 | 0.208 |
| 16 | 0.13 | 0.35 | 0.516 | 0.206 |
| 17 | 0.16 | 0.3 | 0.507 | 0.195 |
| 18 | 0.19 | 0.36 | 0.499 | 0.201 |
| 19 | 0.31 | 0.1 | 0.485 | 0.218 |
| 20 | 0.32 | 0.14 | 0.487 | 0.187 |
| 21 | 0.35 | 0.13 | 0.486 | 0.196 |
| 22 | 0.26 | 0.21 | 0.489 | 0.191 |
| 23 | 0.32 | 0.24 | 0.485 | 0.185 |
| 24 | 0.29 | 0.21 | 0.488 | 0.218 |
| 25 | 0.26 | 0.27 | 0.505 | 0.192 |
| 26 | 0.08 | 0.38 | 0.489 | 0.209 |
| 27 | 0.05 | 0.21 | 0.491 | 0.191 |
| 28 | 0.04 | 0.02 | 0.486 | 0.186 |
| 29 | 0.03 | 0.2 | 0.5 | 0.211 |
| 30 | 0.14 | 0.3 | 0.513 | 0.193 |
According to Figure A1 and Table A1, the statistical parameters (mean and standard deviation (SD)) of the samples of the A and B datasets were calculated; the results are shown in Table A2.
Table A2. The statistical parameters (mean and standard deviation (SD)) of the samples of the A and B datasets.

| Statistical Parameter | Feature 1_A | Feature 2_A | Feature 1_B | Feature 2_B |
|-----------------------|-------------|-------------|-------------|-------------|
| Mean | 0.2 | 0.2 | 0.5 | 0.2 |
| SD | 0.1044 | 0.1038 | 0.0114 | 0.0103 |
Based on Table A2 and the feature values of the unlabeled pixel C, the Euclidean distance (A1 to A4) and the WMD (A5 to A8) were calculated as follows:
$\mathrm{Euclidean\ distance}_{AC} = \sqrt{(0.2 - 0.4)^2 + (0.2 - 0.2)^2} = 0.2$ (A1)
$\mathrm{Euclidean\ distance}_{BC} = \sqrt{(0.5 - 0.4)^2 + (0.2 - 0.2)^2} = 0.1$ (A2)
$\mathrm{Euclidean\ distance}_{AC} > \mathrm{Euclidean\ distance}_{BC}$ (A3)
$\Rightarrow C \in B$ (A4)
$\mathrm{WMD}_{AC} = \dfrac{|0.4 - 0.2|}{0.1044} + \dfrac{|0.2 - 0.2|}{0.1038} = 1.915$ (A5)
$\mathrm{WMD}_{BC} = \dfrac{|0.5 - 0.4|}{0.0114} + \dfrac{|0.2 - 0.2|}{0.0103} = 8.771$ (A6)
$\mathrm{WMD}_{AC} < \mathrm{WMD}_{BC}$ (A7)
$\Rightarrow C \in A$ (A8)
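This example can be checked numerically; the following minimal sketch reproduces the comparison from the statistics of Table A2 (values rounded to two decimals):

```python
import numpy as np

c = np.array([0.4, 0.2])
mean = {"A": np.array([0.2, 0.2]), "B": np.array([0.5, 0.2])}
sd = {"A": np.array([0.1044, 0.1038]), "B": np.array([0.0114, 0.0103])}

for k in ("A", "B"):
    euc = np.linalg.norm(c - mean[k])
    w = np.sum(np.abs(c - mean[k]) / sd[k])
    print(f"{k}: Euclidean = {euc:.2f}, WMD = {w:.2f}")
# A: Euclidean = 0.20, WMD = 1.92  -> WMD assigns C to cluster A
# B: Euclidean = 0.10, WMD = 8.77  -> Euclidean assigns C to cluster B
```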

References

  1. Schowengerdt, R.A. Techniques for Image Processing and Classifications in Remote Sensing; Academic Press: Cambridge, MA, USA, 2012.
  2. Bolstad, P.; Lillesand, T. Rapid maximum likelihood classification. Photogramm. Eng. Remote Sens. 1991, 57, 67–74.
  3. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995.
  4. Cortes, C.; Vapnik, V. Support-vector networks. Mach. Learn. 1995, 20, 273–297.
  5. Freund, Y.; Mason, L. The alternating decision tree learning algorithm. In Proceedings of the Sixteenth International Conference on Machine Learning (ICML); Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1999; pp. 124–133.
  6. Bryll, R.; Gutierrez-Osuna, R.; Quek, F. Attribute bagging: Improving accuracy of classifier ensembles by using random feature subsets. Pattern Recognit. 2003, 36, 1291–1302.
  7. Weinberger, K.Q.; Blitzer, J.; Saul, L.K. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems; Neural Information Processing Systems Foundation, Inc.: Vancouver, BC, Canada, 2006; pp. 1473–1480.
  8. Manning, C.D.; Raghavan, P.; Schütze, H. Introduction to Information Retrieval; Cambridge University Press: Cambridge, UK, 2008; Chapter 20, pp. 405–416.
  9. Makantasis, K.; Doulamis, A.D.; Doulamis, N.D.; Nikitakis, A. Tensor-based classification models for hyperspectral data analysis. IEEE Trans. Geosci. Remote Sens. 2018, 1–15.
  10. Kotsia, I.; Guo, W.; Patras, I. Higher rank support tensor machines for visual recognition. Pattern Recognit. 2012, 45, 4192–4203.
  11. Rashedi, E.; Nezamabadi-Pour, H. A stochastic gravitational approach to feature based color image segmentation. Eng. Appl. Artif. Intell. 2013, 26, 1322–1332.
  12. Neumann, J.; Schnörr, C.; Steidl, G. Combined SVM-based feature selection and classification. Mach. Learn. 2005, 61, 129–150.
  13. Xia, J. Multiple Classifier Systems for the Classification of Hyperspectral Data. Ph.D. Thesis, Université de Grenoble, Grenoble, France, 2014.
  14. Moghaddam, M.H.R.; Sedighi, A.; Fasihi, S.; Firozjaei, M.K. Effect of environmental policies in combating aeolian desertification over Sejzy Plain of Iran. Aeolian Res. 2018, 35, 19–28.
  15. Hatamlou, A.; Abdullah, S.; Nezamabadi-Pour, H. A combined approach for clustering based on K-means and gravitational search algorithms. Swarm Evol. Comput. 2012, 6, 47–52.
  16. Rezaei, M.; Nezamabadi-Pour, H. Using gravitational search algorithm in prototype generation for nearest neighbor classification. Neurocomputing 2015, 157, 256–263.
  17. Li, S.; Wu, H.; Wan, D.; Zhu, J. An effective feature selection method for hyperspectral image classification based on genetic algorithm and support vector machine. Knowl.-Based Syst. 2011, 24, 40–48.
  18. Shelokar, P.; Jayaraman, V.K.; Kulkarni, B.D. An ant colony approach for clustering. Anal. Chim. Acta 2004, 509, 187–195.
  19. Krishna, K.; Murty, M.N. Genetic K-means algorithm. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 1999, 29, 433–439.
  20. Qin, A.K.; Suganthan, P.N. Kernel neural gas algorithms with application to cluster analysis. In Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK, 23–26 August 2004; pp. 617–620.
  21. Omran, M.G.; Engelbrecht, A.P.; Salman, A. Image classification using particle swarm optimization. In Recent Advances in Simulated Evolution and Learning; World Scientific: Singapore, 2004; pp. 347–365.
  22. Luo, Y.; Zou, J.; Yao, C.; Zhao, X.; Li, T.; Bai, G. HSI-CNN: A novel convolution neural network for hyperspectral image. In Proceedings of the 2018 International Conference on Audio, Language and Image Processing (ICALIP), Prague, Czech Republic, 9–10 July 2018; pp. 464–469.
  23. Gao, Q.; Lim, S.; Jia, X. Hyperspectral image classification using convolutional neural networks and multiple feature learning. Remote Sens. 2018, 10, 299.
  24. Chen, Y.; Jiang, H.; Li, C.; Jia, X.; Ghamisi, P. Deep feature extraction and classification of hyperspectral images based on convolutional neural networks. IEEE Trans. Geosci. Remote Sens. 2016, 54, 6232–6251.
  25. Zhao, W.; Du, S. Spectral–spatial feature extraction for hyperspectral image classification: A dimension reduction and deep learning approach. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4544–4554.
  26. Yue, J.; Zhao, W.; Mao, S.; Liu, H. Spectral–spatial classification of hyperspectral images using deep convolutional neural networks. Remote Sens. Lett. 2015, 6, 468–477.
  27. Aptoula, E.; Ozdemir, M.C.; Yanikoglu, B. Deep learning with attribute profiles for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1970–1974.
  28. Zhao, W.; Guo, Z.; Yue, J.; Zhang, X.; Luo, L. On combining multiscale deep learning features for the classification of hyperspectral remote sensing imagery. Int. J. Remote Sens. 2015, 36, 3368–3379.
  29. Schutz, B. Gravity from the Ground Up: An Introductory Guide to Gravity and General Relativity; Cambridge University Press: Cambridge, UK, 2003.
  30. Halliday, D.; Resnick, R.; Walker, J. Fundamentals of Physics; John Wiley & Sons: New York, NY, USA, 1993.
  31. Wright, W.E. Gravitational clustering. Pattern Recognit. 1977, 9, 151–166.
  32. Lai, A.H.; Yung, H. Segmentation of color images based on the gravitational clustering concept. Opt. Eng. 1998, 37, 989–1001.
  33. Kundu, S. Gravitational clustering: A new approach based on the spatial distribution of the points. Pattern Recognit. 1999, 32, 1149–1160.
  34. Chen, C.-Y.; Hwang, S.-C.; Oyang, Y.-J. A statistics-based approach to control the quality of subclusters in incremental gravitational clustering. Pattern Recognit. 2005, 38, 2256–2269.
  35. Long, T.; Jin, L.-W. A new simplified gravitational clustering method for multi-prototype learning based on minimum classification error training. In Advances in Machine Vision, Image Processing, and Pattern Analysis; Springer: Berlin, Germany, 2006; pp. 168–175.
  36. Han, X.; Quan, L.; Xiong, X.; Almeter, M.; Xiang, J.; Lan, Y. A novel data clustering algorithm based on modified gravitational search algorithm. Eng. Appl. Artif. Intell. 2017, 61, 1–7.
  37. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. GSA: A gravitational search algorithm. Inf. Sci. 2009, 179, 2232–2248.
  38. Dowlatshahi, M.B.; Nezamabadi-pour, H. GGSA: A grouping gravitational search algorithm for data clustering. Eng. Appl. Artif. Intell. 2014, 36, 114–121.
  39. Waske, B.; van der Linden, S.; Benediktsson, J.A.; Rabe, A.; Hostert, P. Sensitivity of support vector machines to random feature selection in classification of hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2880–2889.
  40. Hu, W.; Huang, Y.; Wei, L.; Zhang, F.; Li, H. Deep convolutional neural networks for hyperspectral image classification. J. Sens. 2015, 2015.
  41. Li, W.; Wu, G.; Zhang, F.; Du, Q. Hyperspectral image classification using deep pixel-pair features. IEEE Trans. Geosci. Remote Sens. 2017, 55, 844–853.
  42. Song, B.; Li, J.; Dalla Mura, M.; Li, P.; Plaza, A.; Bioucas-Dias, J.M.; Benediktsson, J.A.; Chanussot, J. Remotely sensed image classification using sparse representations of morphological attribute profiles. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5122–5136.
  43. Shahdoosti, H.R.; Mirzapour, F. Spectral–spatial feature extraction using orthogonal linear discriminant analysis for classification of hyperspectral data. Eur. J. Remote Sens. 2017, 50, 111–124.
  44. Haridas, N.; Sowmya, V.; Soman, K. Comparative analysis of scattering and random features in hyperspectral image classification. Procedia Comput. Sci. 2015, 58, 307–314.
  45. Xu, Y.; Wu, Z.; Wei, Z. Spectral–spatial classification of hyperspectral image based on low-rank decomposition. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2370–2380.
  46. Su, H.; Yong, B.; Du, P.; Liu, H.; Chen, C.; Liu, K. Dynamic classifier selection using spectral-spatial information for hyperspectral image classification. J. Appl. Remote Sens. 2014, 8, 085095.
  47. Ran, L.; Zhang, Y.; Wei, W.; Zhang, Q. A hyperspectral image classification framework with spatial pixel pair features. Sensors 2017, 17, 2421.
  48. Iliopoulos, A.-S.; Liu, T.; Sun, X. Hyperspectral image classification and clutter detection via multiple structural embeddings and dimension reductions. arXiv 2015, arXiv:1506.01115.
  49. Dópido, I.; Zortea, M.; Villa, A.; Plaza, A.; Gamba, P. Unmixing prior to supervised classification of remotely sensed hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2011, 8, 760–764.
  50. Bernabé, S.; Marpu, P.R.; Plaza, A.; Dalla Mura, M.; Benediktsson, J.A. Spectral–spatial classification of multispectral images using kernel feature space representation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 288–292.
  51. Camps-Valls, G.; Gomez-Chova, L.; Muñoz-Marí, J.; Vila-Francés, J.; Calpe-Maravilla, J. Composite kernels for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 2006, 3, 93–97.
  52. Ramzi, P.; Samadzadegan, F.; Reinartz, P. Classification of hyperspectral data using an AdaBoostSVM technique applied on band clusters. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2066–2079.
  53. Thanh Noi, P.; Kappas, M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors 2018, 18, 18.
  54. Firozjaei, M.K.; Kiavarz, M.; Nematollahi, O.; Karimpour Reihan, M.; Alavipanah, S.K. An evaluation of energy balance parameters, and the relations between topographical and biophysical characteristics using the mountainous surface energy balance algorithm for land (SEBAL). Int. J. Remote Sens. 2019, 1–31.
  55. Firozjaei, M.K.; Kiavarz, M.; Alavipanah, S.K.; Lakes, T.; Qureshi, S. Monitoring and forecasting heat island intensity through multi-temporal image analysis and cellular automata–Markov chain modelling: A case of Babol City, Iran. Ecol. Indic. 2018, 91, 155–170.
  56. Panah, S.; Mogaddam, M.K.; Firozjaei, M.K. Monitoring spatiotemporal changes of heat island in Babol City due to land use changes. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 42.
  57. Weng, Q.; Firozjaei, M.K.; Sedighi, A.; Kiavarz, M.; Alavipanah, S.K. Statistical analysis of surface urban heat island intensity variations: A case study of Babol City, Iran. GISci. Remote Sens. 2019, 56, 576–604.
  58. Heumann, B.W. An object-based classification of mangroves using a hybrid decision tree–support vector machine approach. Remote Sens. 2011, 3, 2440–2460.
  59. Qian, Y.; Zhou, W.; Yan, J.; Li, W.; Han, L. Comparing machine learning classifiers for object-based land cover classification using very high resolution imagery. Remote Sens. 2015, 7, 153–168.
  60. Karimi Firozjaei, M.; Kiavarz Mogaddam, M.; Alavi Panah, S.K. Monitoring and predicting spatial-temporal changes of heat island in Babol City due to urban sprawl and land use changes. J. Geospat. Inf. Technol. 2017, 5, 123–151.
  61. Karimi Firuzjaei, M.; Kiavarz Moghadam, M.; Mijani, N.; Alavi Panah, S.K. Quantifying the degree-of-freedom, degree-of-sprawl and degree-of-goodness of urban growth in Tehran and the factors affecting it using remote sensing and statistical analyses. J. Geomat. Sci. Technol. 2018, 7, 89–107.
  62. Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621.
  63. Plaza, A.; Martinez, P.; Plaza, J.; Perez, R. Dimensionality reduction and classification of hyperspectral image data using sequences of extended morphological transformations. IEEE Trans. Geosci. Remote Sens. 2005, 43, 466–479.
  64. Haridas, N.; Sowmya, V.; Soman, K. Hyperspectral image classification using random kitchen sink and regularized least squares. In Proceedings of the 2015 International Conference on Communications and Signal Processing (ICCSP), Melmaruvathur, India, 2–4 April 2015; pp. 1665–1669.
  65. Santara, A.; Mani, K.; Hatwar, P.; Singh, A.; Garg, A.; Padia, K.; Mitra, P. BASS Net: Band-adaptive spectral-spatial feature learning neural network for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5293–5301.
Figure 1. The analytical procedure of the homogeneity distance classification algorithm (HDCA).
Figure 2. The optimization algorithm.
Figure 3. IKONOS satellite image of the Shahriar region (Tehran), acquired in 2013, with four spectral bands (red, green, blue, and near-infrared), a spatial resolution of 4 m, and a size of 400 × 400 pixels.
Figure 4. Classification results of the HDCA, SVM, and MLC algorithms: (a) HDCA; (b) SVM; (c) MLC.
Figure 5. Differences between the HDCA and SVM classification results.
Figure 6. The false color composite image (bands 15, 100, and 180) of the Indian Pines hyperspectral dataset, covering a 2.9 × 2.9 km² (145 × 145 pixel) area of northwest Tippecanoe County in the Indian Pine region.
Figure 7. Classification results of the HDCA for the Indian Pines dataset under schemes (a) 1%, (b) 5%, (c) 10%, and (d) 12.5%, in which the corresponding share of the training samples of each class was used to classify the reference dataset.
Figure 8. The false color composite image (bands 15, 100, and 180) of the Salinas dataset, which contains 224 spectral bands with a spatial resolution of 3.7 m and was acquired by the AVIRIS sensor over Salinas Valley, California, USA.
Figure 9. Classification results of the HDCA for the Salinas dataset under schemes (a) 1%, (b) 5%, (c) 10%, and (d) 12.5% of the training samples of each class.
Figure 10. The false color composite image (bands 15, 100, and 180) of the Salinas-A scene, a subset of the Salinas dataset with 224 spectral bands and a spatial resolution of 3.7 m, acquired by the AVIRIS sensor over Salinas Valley, California, USA.
Figure 11. Classification results of the HDCA for the Salinas-A scene under schemes (a) 1%, (b) 5%, (c) 10%, and (d) 12.5% of the training samples of each class.
Figure 12. The effect of training sample size on the classification accuracies of the three datasets.
Table 1. User's and Producer's accuracy, Overall accuracy, and Cohen's Kappa of the three classifiers (all values in %).

HDCA (left half: User's Accuracy; right half: Producer's Accuracy)
Class | Road | Tree | Farmland | Building | Bare land || Road | Tree | Farmland | Building | Bare land
Road | 98.56 | 0 | 0 | 1.44 | 0 || 90.51 | 0 | 0 | 1.69 | 0
Tree | 0.56 | 99.22 | 0.22 | 0 | 0 || 0.51 | 99.00 | 0.10 | 0 | 0
Farmland | 0 | 0.45 | 99.55 | 0 | 0 || 0 | 1.00 | 99.65 | 0 | 0
Building | 9.78 | 0 | 0.44 | 80.67 | 9.11 || 8.98 | 0 | 0.20 | 94.53 | 10.90
Bare land | 0 | 0 | 0.14 | 4.14 | 95.71 || 0 | 0 | 0.05 | 3.78 | 89.10
Overall Accuracy: 95.69%; Cohen's Kappa Coefficient: 94.35%

SVM (left half: User's Accuracy; right half: Producer's Accuracy)
Class | Road | Tree | Farmland | Building | Bare land || Road | Tree | Farmland | Building | Bare land
Road | 98.67 | 0 | 0.11 | 1.22 | 0 || 88.62 | 0 | 0.04 | 1.52 | 0
Tree | 0.89 | 98.56 | 0.56 | 0 | 0 || 0.80 | 98.78 | 0.25 | 0 | 0
Farmland | 0 | 0.50 | 99.50 | 0 | 0 || 0 | 1.11 | 99.25 | 0 | 0
Building | 11.78 | 0 | 0.89 | 76.78 | 10.33 || 10.58 | 0 | 0.40 | 95.18 | 12.13
Bare land | 0 | 0.14 | 0.14 | 3.43 | 96.29 || 0 | 0.11 | 0.04 | 3.31 | 87.87
Overall Accuracy: 95.00%; Cohen's Kappa Coefficient: 93.95%

MLC (left half: User's Accuracy; right half: Producer's Accuracy)
Class | Road | Tree | Farmland | Building | Bare land || Road | Tree | Farmland | Building | Bare land
Road | 97.89 | 0.33 | 0.11 | 1.67 | 0 || 90.45 | 0.32 | 0.05 | 2.07 | 0
Tree | 0.44 | 99.33 | 0.22 | 0 | 0 || 0.41 | 93.91 | 0.10 | 0 | 0
Farmland | 0 | 2.70 | 95.55 | 0.60 | 1.15 || 0 | 5.67 | 99.32 | 1.66 | 2.78
Building | 9.89 | 0 | 1.11 | 74.56 | 14.44 || 9.14 | 0 | 0.52 | 92.68 | 15.74
Bare land | 0 | 0.14 | 0 | 3.71 | 96.14 || 0 | 0.11 | 0 | 3.59 | 81.48
Overall Accuracy: 93.15%; Cohen's Kappa Coefficient: 91.06%
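As a reference for how the summary statistics in Tables 1, 3, 5, and 7 are obtained, the Python sketch below derives overall accuracy and Cohen's Kappa from a raw confusion matrix; the 3 × 3 matrix used here is an illustrative placeholder, not the study's actual counts:

    import numpy as np

    def overall_accuracy(cm):
        # Share of all samples lying on the diagonal (correctly classified)
        return np.trace(cm) / cm.sum()

    def cohens_kappa(cm):
        n = cm.sum()
        p_o = np.trace(cm) / n                                   # observed agreement
        p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
        return (p_o - p_e) / (1 - p_e)

    # Illustrative confusion matrix (rows: reference classes, columns: predictions)
    cm = np.array([[50, 2, 1],
                   [3, 45, 2],
                   [1, 4, 48]])
    print(overall_accuracy(cm), cohens_kappa(cm))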
Table 2. Types of classes and their respective numbers of labeled pixels in the Indian Pines dataset.

No | Class | Pixels | No | Class | Pixels
1 | Alfalfa | 46 | 9 | Hay-windrowed | 478
2 | Bldg-Grass-Tree-Drives | 386 | 10 | Oats | 20
3 | Corn-no till | 1428 | 11 | Soybeans-no till | 972
4 | Corn-min till | 830 | 12 | Soybeans-min till | 2455
5 | Corn | 237 | 13 | Soybeans-clean | 593
6 | Grass/pasture | 483 | 14 | Stone-Steel-Towers | 93
7 | Grass/trees | 730 | 15 | Wheat | 205
8 | Grass/pasture-mowed | 28 | 16 | Woods | 1265
Table 3. User's and Producer's accuracy per class (for the 10% training-sample scheme), Overall accuracy and Cohen's Kappa of the HDCA under different schemes on the Indian Pines dataset.

Class | UA | PA | Class | UA | PA
Alfalfa | 100 | 100 | Oats | 100 | 100
Corn-no till | 97.92 | 95.25 | Soybeans-no till | 97.03 | 96.81
Corn-min till | 96.67 | 97.05 | Soybeans-min till | 96.22 | 98.05
Corn | 97.71 | 100 | Soybeans-clean | 98.30 | 97.38
Grass/pasture | 99.77 | 99.77 | Wheat | 100 | 99.46
Grass/trees | 100 | 100 | Woods | 99.30 | 99.30
Grass/pasture-mowed | 100 | 92.00 | Bldg-Grass-Tree-Drives | 97.42 | 97.98
Hay-windrowed | 99.77 | 100 | Stone-Steel-Towers | 100 | 90.48

Scheme | (a) 1% | (b) 5% | (c) 10% | (d) 12.5%
Overall Accuracy | 74.89 | 94.94 | 97.88 | 98.09
Cohen's Kappa | 0.7075 | 0.9423 | 0.9759 | 0.9789
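Schemes (a)–(d) in Tables 3, 5, and 7 each draw a fixed percentage of the labeled pixels of every class for training. A minimal Python sketch of this per-class (stratified) sampling follows; the names are hypothetical, with `labels` standing for a flattened ground-truth map in which 0 marks unlabeled pixels:

    import numpy as np

    def stratified_sample(labels, fraction, seed=0):
        # Draw `fraction` of the labeled pixels of each class for training
        rng = np.random.default_rng(seed)
        train_idx = []
        for cls in np.unique(labels[labels > 0]):       # skip the unlabeled background (0)
            idx = np.flatnonzero(labels == cls)
            n = max(1, round(fraction * idx.size))      # keep at least one pixel per class
            train_idx.append(rng.choice(idx, size=n, replace=False))
        return np.concatenate(train_idx)

    # e.g., scheme (c): 10% of the labeled pixels of each class
    # train_indices = stratified_sample(ground_truth.ravel(), 0.10)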
Table 4. Types of classes and their respective numbers of labeled pixels in the Salinas dataset.

No | Class | Pixels | No | Class | Pixels
1 | Brocoli_green_weeds_1 | 2009 | 9 | Soil_vinyard_develop | 6203
2 | Brocoli_green_weeds_2 | 3726 | 10 | Corn_senesced_weeds | 3278
3 | Fallow | 1976 | 11 | Lettuce_romaine_4wk | 1068
4 | Fallow_rough_plow | 1394 | 12 | Lettuce_romaine_5wk | 1927
5 | Fallow_smooth | 2678 | 13 | Lettuce_romaine_6wk | 916
6 | Stubble | 3959 | 14 | Lettuce_romaine_7wk | 1070
7 | Celery | 3579 | 15 | Vinyard_untrained | 7268
8 | Grapes_untrained | 11271 | 16 | Vinyard_vertical_trellis | 1807
Table 5. User's and Producer's accuracy per class (for the 10% training-sample scheme), Overall accuracy and Cohen's Kappa of the HDCA under different schemes on the Salinas dataset.

Class | UA | PA | Class | UA | PA
Brocoli_green_weeds_1 | 100 | 100 | Soil_vinyard_develop | 99.86 | 99.98
Brocoli_green_weeds_2 | 100 | 100 | Corn_senesced_weeds | 99.66 | 99.76
Fallow | 100 | 100 | Lettuce_romaine_4wk | 100 | 98.86
Fallow_rough_plow | 99.52 | 99.76 | Lettuce_romaine_5wk | 99.88 | 99.83
Fallow_smooth | 100 | 99.71 | Lettuce_romaine_6wk | 100 | 100
Stubble | 100 | 99.76 | Lettuce_romaine_7wk | 100 | 100
Celery | 100 | 99.88 | Vinyard_untrained | 98.89 | 99.26
Grapes_untrained | 99.43 | 99.30 | Vinyard_vertical_trellis | 99.82 | 100

Scheme | (a) 1% | (b) 5% | (c) 10% | (d) 12.5%
Overall Accuracy | 96.03 | 99.08 | 99.66 | 99.82
Cohen's Kappa | 0.9562 | 0.9903 | 0.9966 | 0.9983
Table 6. Types of classes and their respective numbers of labeled pixels in the Salinas-A scene.

No | Class | Pixels
1 | Brocoli_green_weeds_1 | 391
2 | Corn_senesced_weeds | 1343
3 | Lettuce_romaine_4wk | 616
4 | Lettuce_romaine_5wk | 1525
5 | Lettuce_romaine_6wk | 674
6 | Lettuce_romaine_7wk | 799
Table 7. User's and Producer's accuracy per class (for the 10% training-sample scheme), Overall accuracy and Cohen's Kappa of the HDCA under different schemes on the Salinas-A scene.

Class | UA | PA
Brocoli_green_weeds_1 | 100 | 99.41
Corn_senesced_weeds | 100 | 99.91
Lettuce_romaine_4wk | 100 | 99.63
Lettuce_romaine_5wk | 99.78 | 100
Lettuce_romaine_6wk | 99.66 | 100
Lettuce_romaine_7wk | 99.71 | 99.71

Scheme | (a) 1% | (b) 5% | (c) 10% | (d) 12.5%
Overall Accuracy | 96.22 | 99.46 | 99.85 | 99.91
Cohen's Kappa | 0.9526 | 0.9933 | 0.9981 | 0.9989
Table 8. Overall accuracies (%) of the proposed method compared to other traditional and deep learning-based classifiers.

Dataset | Training Samples (%) | k-NN | SVM | MLP | CNN | PPFs | Proposed (HDCA)
Indian Pines | 1 | 65.2 | 73.44 | 68.53 | 70.3 | 75.89 | 74.90
Indian Pines | 5 | 74.2 | 83.63 | 81.64 | 86.12 | 90.72 | 94.90
Indian Pines | 10 | 80.6 | 87.97 | 85.35 | 90.6 | 95.11 | 97.81
Indian Pines | 12.5 | 81.3 | 89.27 | 87.93 | 92.75 | 96.75 | 98.09
Salinas | 1 | 68.3 | 75.94 | 72.57 | 74.28 | 86.37 | 96.03
Salinas | 5 | 78.1 | 86.41 | 84.26 | 90.63 | 94.26 | 99.08
Salinas | 10 | 83.8 | 89.68 | 88.61 | 93.52 | 96.91 | 99.66
Salinas | 12.5 | 85.9 | 92.6 | 90.3 | 96.68 | 98.15 | 99.82
Salinas-A | 1 | 71.83 | 77.37 | 74.42 | 76.78 | 90.34 | 96.03
Salinas-A | 5 | 79.69 | 88.59 | 84.94 | 92.36 | 95.26 | 99.08
Salinas-A | 10 | 85.75 | 92.36 | 90.65 | 94.61 | 97.97 | 99.66
Salinas-A | 12.5 | 89.43 | 93.96 | 92.33 | 97.93 | 99.01 | 99.82
Table 9. Overall accuracies (%) of the proposed method compared to the results of other algorithms in previous studies.

Dataset | Training samples (%) | RKS-RLS a | SUnSALEMAP b | SVM-CK c | GURLS d | OLDA e | BASS Net f | CNN-MFL g | HSI-CNN h | Proposed (HDCA)
Indian Pines | 1 | - | - | 73.7 | - | 73.3 | - | - | - | 74.90
Indian Pines | 5 | - | 95 | 91.4 | - | 94.5 | - | - | - | 94.90
Indian Pines | 10 | 93.79 | 96.8 | 94.9 | 89.59 | 97.6 | 96.77 | 97.54 | - | 97.81
Indian Pines | 12.5 | - | - | 95.8 | - | 97.8 | - | - | 99.09 | 98.09
Salinas | 1 | - | - | 95.6 | - | 96 | - | - | - | 96.03
Salinas | 5 | - | - | 98.7 | - | 99.4 | - | - | - | 99.08
Salinas | 10 | - | - | 98.9 | - | 99.9 | 95.33 | 98.34 | - | 99.66
Salinas | 12.5 | - | - | 99.1 | - | 99.9 | - | - | 98.95 | 99.82
Salinas-A | 1 | - | - | - | - | - | - | - | - | 96.03
Salinas-A | 5 | - | - | - | - | - | - | - | - | 99.08
Salinas-A | 10 | 98.58 | - | - | 98.31 | - | - | - | - | 99.66
Salinas-A | 12.5 | - | - | - | - | - | - | - | - | 99.82

a: Taken from Haridas et al. (2015) [64]. b: Taken from Song et al. (2014) [42]. c: Taken from Camps-Valls et al. (2006) [51]. d: Taken from Haridas et al. (2015) [44]. e: Taken from Shahdoosti and Mirzapour (2017) [43]. f: Taken from Santara et al. (2017) [65]. g: Taken from Gao et al. (2018) [23]. h: Taken from Luo et al. (2018) [22].
