Article

Detection of Moving Ships in Sequences of Remote Sensing Images

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
2 Collaborative Innovation Center of Geospatial Technology, Wuhan University, Wuhan 430079, China
3 School of Resource and Environmental Sciences, Wuhan University, Wuhan 430079, China
4 Beijing Institute of Remote Sensing Information, Beijing 100020, China
* Author to whom correspondence should be addressed.
ISPRS Int. J. Geo-Inf. 2017, 6(11), 334; https://doi.org/10.3390/ijgi6110334
Submission received: 25 August 2017 / Revised: 27 September 2017 / Accepted: 30 October 2017 / Published: 1 November 2017

Abstract
High-speed agile remote sensing satellites can capture multiple sequences of images. However, the frame rate is lower and the baseline between consecutive images is much longer than in normal image sequences. As a result, the edges and shadows in each image of a sequence vary considerably, which places greater demands on the target detection algorithm. To address the characteristics of multi-view image sequences, we propose an approach to detect moving ships on the water surface. Based on marker-controlled watershed segmentation, we use the extracted foreground and background images to segment moving ships and obtain their complete shape and texture information. An inter-frame difference algorithm extracts the foreground object information, while Otsu's algorithm extracts the image background. The foreground and background information is then fused to overcome the interference with object detection caused by the long imaging baseline. The experimental results show that the proposed method is effective for moving ship detection.

1. Introduction

With the development of economic globalization, maritime safety and economic conflicts between countries at sea are becoming increasingly prominent. Highly mobile remote sensing satellites can carry out continuous observations over a given area, providing real-time, fast and accurate access to dynamic marine information. This supports timely and effective decision-making as well as the rapid settlement of marine emergencies. Meanwhile, agile satellites are capable of rapid maneuvering and multi-view imaging of the same area. Using such multi-image data, we can observe and track ships as targets and obtain their orientation, speed and other dynamic information. This not only provides a basis for decision-making and guidance, but also reflects an important aspect of remote sensing satellite applications [1].
However, multi-view sequence images captured by agile satellites exhibit long imaging intervals and large base-height ratios, so the appearance of the same surface feature changes greatly across the images of a sequence. Figure 1 shows how agile satellites capture sequence images. Displaced building shadows, different building viewing angles and changes in illumination can all occur within one group of sequence images, and they strongly affect target detection methods. At present, mainstream target detection methods fall into two classes: background difference methods and inter-frame difference methods [2]. The background difference method uses an algorithm to obtain a background image, and then subtracts the background image from the current frame to locate the moving target. There are many ways to build the background model. For example, considering the fact that the background is always observed in image sequences, the background can be extracted with partial differential equations [3,4]; an estimation model built from a non-parametric Gaussian kernel density can estimate pixel probabilities and detect objects against a slightly shaking background [5,6]; and a mixture of k Gaussian distributions can model the characteristics of the background distribution [7,8,9]. However, the background models established by these algorithms are highly correlated with the background of the images, which means the model cannot be built accurately when the scene differs greatly from image to image.
The inter-frame difference method is another class of target detection that uses the difference between two or more frames to obtain the shape, position and other information about a moving object [9,10,11]. By continuously differencing two or more frames to update the background information, a complete background model can be obtained and used to extract moving objects [10]. Moreover, moving objects in image sequences obey high-order statistical distributions, so they can be obtained by filtering the difference images in certain areas with a high-order statistical operator [11]. Alternatively, Canny feature points can be extracted from the image sequences and the feature images differenced, so that moving targets are extracted from the characteristics of the feature points [12]. Although these algorithms adapt better to real environments than background difference algorithms, they are influenced by the displacement of the moving target: sometimes they cannot detect the complete target, and only part of the moving-target information is obtained. Therefore, this paper proposes a method combining the background difference and inter-frame difference algorithms to effectively overcome the interference that large base-height ratios and large ground-feature changes cause in target detection from multi-view image sequences. The accuracy of target detection is greatly improved, providing technical support for the rapid discovery and tracking of key targets by agile satellites.

2. Methods

In this paper, the target detection algorithm for moving ships in multi-view image sequences is divided into three parts: foreground extraction, background extraction and target segmentation. The flow chart is shown in Figure 2. First, histogram matching is performed on the three frames of a multi-view image sequence to eliminate gray-level differences between the images. Then, the differences between the three frames are computed to obtain partial information about the moving targets. Next, the differential results are filtered with a multi-structuring element operator and binarized, which eliminates the interference caused by movement between frames; the resulting mask images carry the partial moving-target information and serve as foreground images. The original images are then thresholded with Otsu's method and binarized to extract the background information from the image sequence. Finally, combining the foreground and background images, we apply marker-based watershed segmentation to the multi-view image sequence to quickly and completely detect the moving targets.
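To make this flow concrete, the sketch below (Python with NumPy) strings the steps together. Every helper called here (match_histogram, three_frame_difference, multi_structuring_filter, otsu_threshold, marker_watershed) is a hypothetical name for the illustrative per-step sketch given in the corresponding subsection below; none of this is the authors' actual code.

```python
import numpy as np

def detect_moving_ships(f1, f2, f3):
    """Sketch of the Figure 2 pipeline; all helpers are the illustrative
    per-step sketches defined later in this paper, not the authors' code."""
    # Histogram matching: remove radiometric offsets w.r.t. the first frame.
    f2m = match_histogram(f1, f2)
    f3m = match_histogram(f1, f3)
    # Inter-frame difference, then multi-structuring-element filtering,
    # yields the binary foreground mask.
    diff = (three_frame_difference(f1, f2m, f3m) * 255).astype(np.uint8)
    fg = (multi_structuring_filter(diff) > 0).astype(np.uint8)
    # Otsu's threshold separates the dark water surface as the background.
    bg = (f2m <= otsu_threshold(f2m)).astype(np.uint8)
    # Marker-based watershed fuses both masks and segments the ships.
    return marker_watershed(f2m, fg, bg)
```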

2.1. Methods of Foreground Extraction

2.1.1. Inter-Frame Difference Algorithm

The inter-frame difference algorithm can quickly obtain dynamic targets in the foreground [13]. Its basic principle is that two or more frames of an image sequence are subtracted, yielding difference images that contain part of the moving-target area; these images are then binarized with a threshold. Assume that three consecutive frames of the multi-view images are $f_{(k-1)}(x,y)$, $f_{(k)}(x,y)$ and $f_{(k+1)}(x,y)$. The first two images are processed as follows:
$$d_{(k-1,k)}(x,y) = f_{(k)}(x,y) - f_{(k-1)}(x,y)$$
where $d_{(k-1,k)}(x,y)$ is the result of the subtraction. Because the high mobility of agile satellites causes large changes in the appearance of ground features, the result of this single subtraction contains a large amount of noise, which strongly interferes with the target information in the difference image. The third frame is therefore subtracted as well:
$$d_{(k-1,k,k+1)}(x,y) = f_{(k+1)}(x,y) - f_{(k)}(x,y) - f_{(k-1)}(x,y)$$
where $d_{(k-1,k,k+1)}(x,y)$ is the resulting image after the second difference. We then transform $d_{(k-1,k,k+1)}(x,y)$ into a binary image $T(i,j)$:
$$T(i,j) = \begin{cases} 0, & d_{(k-1,k,k+1)}(x,y) \le Th \\ 1, & d_{(k-1,k,k+1)}(x,y) > Th \end{cases}$$
where $Th$ is the binarization threshold; 0 and 1 denote the non-target and target areas, respectively. Because a filtering step follows, we want to retain as much information from the difference images as possible at this stage, so we simply set $Th = 0$.
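A minimal sketch of this step in Python with NumPy, assuming 8-bit grayscale frames. The operators in the second equation were garbled in the source, so the sign convention below follows the reconstruction above and should be read as an assumption rather than the authors' exact implementation:

```python
import numpy as np

def three_frame_difference(f_prev, f_curr, f_next, th=0):
    """Twice inter-frame difference as reconstructed above: a pixel is kept
    when the newest frame is brighter than the two earlier frames combined,
    which responds to bright ships arriving over dark water."""
    d = (f_next.astype(np.int32)
         - f_curr.astype(np.int32)
         - f_prev.astype(np.int32))
    # Binarize with Th = 0; the morphological filter cleans up the rest.
    return (d > th).astype(np.uint8)
```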

2.1.2. Multi-Structuring Element Morphological Filtering

The binary images produced by the inter-frame difference still contain noise generated by the dynamic changes of ground features, so they cannot directly provide information on the moving targets; the difference images must be filtered with morphological operators. To avoid the pitfalls of morphological filtering with a single structuring element [13], we filter with several structuring elements. Since building corners, shadows and other interfering features occur mostly in linear and angular combinations, we successively apply linear structuring elements at 0°, 45°, 90° and 135° and at several scales in opening and closing operations, and combine the filtered results, enhancing the targets while suppressing the background. The structuring elements are shown in Figure 3. Using different scales and different structuring elements, positive and negative noise signals can be suppressed simultaneously [14], filtering out as much signal unrelated to the target information as possible. The formula of the algorithm is as follows:
$$r = \sum_i \sum_j \omega_i \left( (f \circ b_{ij}) \bullet b_{ij} \right)$$
where $i$ and $j$ index the structuring-element orientations and scales, respectively, $\omega_i$ is the weight, $f$ is the original image, $b_{ij}$ is a structuring element, "$\circ$" denotes the opening operation, and "$\bullet$" denotes the closing operation.
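A possible implementation of this filter with OpenCV is sketched below. The scale set and the equal weights are illustrative choices, since the paper does not list its exact values here:

```python
import cv2
import numpy as np

def line_kernel(length, angle):
    """Linear structuring element at 0, 45, 90 or 135 degrees (cf. Figure 3)."""
    k = np.zeros((length, length), np.uint8)
    c = length // 2
    if angle == 0:
        k[c, :] = 1
    elif angle == 90:
        k[:, c] = 1
    elif angle == 45:
        np.fill_diagonal(np.fliplr(k), 1)  # anti-diagonal line
    else:  # 135 degrees
        np.fill_diagonal(k, 1)             # main-diagonal line
    return k

def multi_structuring_filter(img, lengths=(3, 5, 7), weights=(0.25,) * 4):
    """Weighted sum of open-close results over orientations i and scales j,
    instantiating the formula above; lengths/weights are assumptions."""
    acc = np.zeros(img.shape, np.float64)
    for w, angle in zip(weights, (0, 45, 90, 135)):
        for n in lengths:
            b = line_kernel(n, angle)
            opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, b)    # f o b_ij
            acc += w * cv2.morphologyEx(opened, cv2.MORPH_CLOSE, b)
    return acc
```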

2.2. Methods of Background Extraction and Segmentation

2.2.1. Otsu’s Method

Otsu's method is a simple, efficient and adaptive way of finding the optimal threshold between the background and the foreground targets [15]. The larger the grayscale difference between the background and the targets, the more accurate the threshold [15,16]. In optical images, the grayscale values of the water surface are consistently lower than those of the target ships, so Otsu's method can effectively separate the target ships from the background.
Otsu's method assumes that the image can be statistically divided into two classes, background and foreground; that is, the histogram of the image is bimodal. The goal is to find the threshold that minimizes the intra-class variance of the background and the foreground. The weighted sum of the two class variances is defined as:
$$\sigma_\omega^2(t) = \omega_0(t)\,\sigma_0^2(t) + \omega_1(t)\,\sigma_1^2(t)$$
where the weights $\omega_0(t)$ and $\omega_1(t)$ are the probabilities of the two classes (background and foreground) separated by the threshold $t$, and $\sigma_0^2(t)$ and $\sigma_1^2(t)$ are the variances of the two classes. The class probabilities are computed as:
$$\omega_0(t) = \sum_{i=0}^{t-1} p(i)$$
$$\omega_1(t) = \sum_{i=t}^{L-1} p(i)$$
where $L$ is the number of gray levels in the image and $p(i)$ is the probability of gray level $i$. Otsu showed that minimizing the intra-class variance is equivalent to maximizing the between-class variance [15], that is:
$$\sigma_b^2(t) = \sigma^2 - \sigma_\omega^2(t) = \omega_0(\mu_0 - \mu_T)^2 + \omega_1(\mu_1 - \mu_T)^2 = \omega_0(t)\,\omega_1(t)\,[\mu_0(t) - \mu_1(t)]^2$$
where $\mu_0(t)$ and $\mu_1(t)$ are the two class means and $\mu_T$ is the global mean, calculated as follows:
$$\mu_0(t) = \sum_{i=0}^{t-1} \frac{i\,p(i)}{\omega_0}$$
$$\mu_1(t) = \sum_{i=t}^{L-1} \frac{i\,p(i)}{\omega_1}$$
$$\mu_T = \sum_{i=0}^{L-1} i\,p(i)$$
The probabilities $\omega$, the means $\mu$ and the between-class variance $\sigma_b^2(t)$ can thus be computed for every candidate threshold $t$; the $t$ that maximizes $\sigma_b^2(t)$ is the optimal threshold for image segmentation.
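The optimal threshold can be found by an exhaustive search over all gray levels. A minimal sketch in Python with NumPy, following the formulas above:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Exhaustive search for the t that maximizes the between-class variance."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()                          # gray-level probabilities p(i)
    i = np.arange(levels)
    best_t, best_var = 0, -1.0
    for t in range(1, levels):
        w0, w1 = p[:t].sum(), p[t:].sum()          # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (i[:t] * p[:t]).sum() / w0           # mean of the background class
        mu1 = (i[t:] * p[t:]).sum() / w1           # mean of the foreground class
        var_b = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
        if var_b > best_var:
            best_var, best_t = var_b, t
    return best_t
```

In practice, OpenCV's built-in cv2.threshold with the cv2.THRESH_OTSU flag performs the same optimization in a single call.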

2.2.2. Marker-Based Watershed Segmentation Algorithm

The watershed algorithm is a regional image segmentation method based on mathematical morphology, first proposed and applied by Beucher and Lantuejoul in the late 1970s [17,18,19,20]. The immersion-based watershed algorithm [18] is one of its common forms. The image is treated as a topographic surface; a small hole is pierced at each local minimum, and the whole surface is gradually submerged in water. As the water level rises, "dams" are built wherever waters from different basins would merge, and these dams ultimately form the segmentation boundaries, that is, the "watersheds".
However, the traditional watershed algorithm has several defects [19]: (1) when the noise or texture of the image is pronounced, it tends to over-segment; (2) its weak response to low-contrast images prevents it from producing a good segmentation. Because remote sensing images have high resolution and clarity as well as relatively complex texture, the traditional watershed algorithm exhibits significant over-segmentation on them.
The marker-based watershed algorithm can mitigate these defects [20,21]. It uses known segmented regions as local minima, that is, the lowest points of the topographic map, and the immersion then starts from these known lowest points and ultimately yields the final segment boundaries [20]. Because a priori knowledge is involved in the division, this method effectively avoids the over-segmentation caused by texture or noise, greatly improving segmentation accuracy. The process of the marker-based watershed algorithm is as follows:
(1)
Different markers are given different labels, and pixels of the markers are the start of the immersion.
(2)
The neighboring pixels of the markers are inserted into a priority queue, with priority determined by their gradient magnitude, which is calculated as follows:
$$grad(i,j) = \sqrt{[f(i,j) - f(i+1,j)]^2 + [f(i,j) - f(i,j+1)]^2}$$
where $grad(i,j)$ is the gradient of the pixel located at $(i,j)$, and $f(i,j)$ is the pixel value.
(3)
The pixel with the lowest priority level is extracted from the priority queue. If the neighbors of the extracted pixel that have already been labeled all have the same label, then the pixel is labeled with their label. All non-marked neighbors that are not yet in the priority queue are put into the priority queue.
(4)
Redo step 3 until the priority queue is empty.
In this paper, the foreground information of the moving targets obtained with the inter-frame difference algorithm, together with the background information obtained with Otsu's method, constitutes the regions of interest and provides the a priori knowledge, that is, the markers for the watershed algorithm. We then segment the whole image with the marker-based watershed algorithm to capture the shape information of the targets. By treating the known regions as a priori knowledge, over-segmentation is greatly reduced in the final result.
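A minimal sketch of this procedure using OpenCV's watershed implementation; how the paper's foreground and background masks are fused into labeled markers is our assumption:

```python
import cv2
import numpy as np

def marker_watershed(gray, fg_mask, bg_mask):
    """Marker-based watershed seeded by the extracted foreground and
    background masks (both binary, same size as the 8-bit image gray)."""
    markers = np.zeros(gray.shape, np.int32)
    markers[bg_mask > 0] = 1                      # water surface -> label 1
    # Each connected foreground blob becomes its own seed: labels 2, 3, ...
    n_labels, blobs = cv2.connectedComponents(fg_mask.astype(np.uint8))
    markers[blobs > 0] = blobs[blobs > 0] + 1
    bgr = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)  # cv2.watershed needs 3 channels
    cv2.watershed(bgr, markers)                   # floods from the marker seeds
    return markers                                # boundary pixels come back as -1
```

Boundary pixels are labeled −1 by cv2.watershed, from which the ship contours can be traced.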

3. Experiments

3.1. Data

In this paper, we used three consecutive frames of multi-view images captured by a Chinese agile remote sensing satellite. Each image is 3440 × 2492 pixels. They are panchromatic images, which cover the whole visible spectrum and offer higher resolution than multispectral images, as shown in Figure 4. The figure contains three moving ships; for convenience of explanation, we zoomed in on the three areas containing the targets and numbered them. The movement of these ships is evident. Ship 1 and Ship 3 are relatively slow, so each ship overlaps itself between images. Ship 2 is faster than the other two, so it has almost no overlap across the three frames. Meanwhile, all three ships leave very clear trails. Table 1 shows the interval time and the base-height ratio between adjacent images. Because the agile satellite needs time to maneuver between capturing two frames, the interval time and base-height ratio between adjacent frames are large. As a result, the background features change considerably, which complicates post-processing.

3.2. Results of Foreground Extraction

3.2.1. Histogram Matching

To compensate for changes in the overall radiance of the images, caused by changes in the angle between the sensor and the sun as the satellite maneuvers, we first adjust the histograms of the last two images to match the first frame. The result is shown in Figure 5. After histogram adjustment, the brightness of the second and third frames is essentially the same as that of the first. Comparing the histograms before and after adjustment, the adjusted gray-level distributions are clearly more consistent with that of the first frame. This step therefore prevents offsets in overall radiance between frames from degrading the difference processing that follows.
To quantify the improvement, we evaluate the correlation coefficient, the intersection coefficient, the chi-square statistic and the Bhattacharyya distance between pairs of histograms. Table 2 shows that after histogram matching, the correlation coefficients between Frame 1 and Frame 2 and between Frame 1 and Frame 3 are clearly closer to 1, the chi-square statistics drop sharply, and the intersection coefficients increase markedly. The Bhattacharyya distances are also reduced. These changes show that after histogram matching, the second and third frames correlate much more strongly with the first. The gray-level distributions among the frames in Figure 6 are likewise improved, which lays a solid foundation for the accuracy of the subsequent differencing.
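For illustration, a classic CDF-based histogram matching step and the four histogram-similarity measures of Table 2 can be computed as follows; OpenCV's cv2.compareHist supports exactly these four metrics. This is a standard sketch, not necessarily the authors' implementation:

```python
import cv2
import numpy as np

def match_histogram(ref, img):
    """Map an 8-bit image's gray levels so its CDF matches the reference's."""
    ref_cdf = np.cumsum(np.bincount(ref.ravel(), minlength=256)) / ref.size
    img_cdf = np.cumsum(np.bincount(img.ravel(), minlength=256)) / img.size
    lut = np.searchsorted(ref_cdf, img_cdf).clip(0, 255).astype(np.uint8)
    return lut[img]

def histogram_similarity(a, b):
    """The four measures reported in Table 2, via cv2.compareHist."""
    ha = cv2.calcHist([a], [0], None, [256], [0, 256])
    hb = cv2.calcHist([b], [0], None, [256], [0, 256])
    return {
        "correlation": cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL),
        "chi-square": cv2.compareHist(ha, hb, cv2.HISTCMP_CHISQR),
        "intersection": cv2.compareHist(ha, hb, cv2.HISTCMP_INTERSECT),
        "bhattacharyya": cv2.compareHist(ha, hb, cv2.HISTCMP_BHATTACHARYYA),
    }
```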

3.2.2. Inter-Frame Difference

Through the inter-frame difference algorithm, we can quickly detect information on the moving targets. The results are shown in Figure 7. Figure 7a shows that, because of the large attitude changes and long imaging intervals when the agile satellite captures the multi-view sequence, the ground features change significantly across the images. In the result of the traditional single difference, the error and noise outweigh the moving-target information we want to extract. Therefore, building on the traditional difference, we bring in the third frame for a second differencing step; the result is shown in Figure 7b. After the second difference, the moving-target information is preserved along with only a small amount of residual error caused by changes in building contours, shadows and illumination. The result is a great improvement over the traditional method.

3.2.3. Multi-Structuring Element Morphological Filter

From the results shown above, although differencing twice removes most of the noise caused by the dynamic changes of background features, a small amount remains in the results, so a morphological filtering operator is needed to remove the residual errors.
However, as shown in Figure 8a, the noise in the difference results stems largely from changes of feature edges and shadows caused by the different viewing angles at which the agile satellite captures the images. This noise is therefore generally linear in shape and cannot be eliminated by a filter with a single structuring element: Figure 8b shows that after top-hat morphological filtering with a single structuring element, considerable noise remains. To solve this problem, we apply morphological filtering with multiple structuring elements. The results are shown in Figure 8c. After multi-structuring element morphological filtering, the linear noise remaining in the original image is mostly eliminated, while the moving-target information we want to extract is preserved.

3.3. Results of Background Extraction and Segmentation

3.3.1. Otsu’s Method

Generally, in remote sensing images, the number of pixels belonging to moving ship targets is much smaller than the number belonging to the water surface, and the overall grayscale values of the water surface are lower than those of the ship targets. Therefore, Otsu's method is used to extract the background of the moving ship targets, that is, the water surface. To suppress noise effectively, we again apply multi-structuring element morphological filtering and binarize the results. Figure 9b shows the result of background extraction, where the highlighted regions are the extracted background. Because of the strong contrast between the water surface and the moving ships, the background is extracted completely, which provides a good foundation for the segmentation step.

3.3.2. Marker-Based Watershed Segmentation

The texture of high-resolution remote sensing images is very rich. In these three images, the ground features on the riverbank and the trails the ships leave behind are very clear, which causes over-segmentation with the traditional watershed algorithm and greatly reduces the detection accuracy for the moving ships [22], as shown in Figure 10a. The marker-based watershed algorithm is therefore used to solve this problem. First, the foreground and background images obtained in the previous steps are superimposed. Then, using the foreground and background regions as the known local minima, we carry out marker-based watershed segmentation on the original images to obtain the final result, shown in Figure 10b. The marker-based watershed algorithm effectively segments the moving ships, avoids over-segmentation, and greatly improves segmentation accuracy.
Figure 11 shows the detection results overlaid on the original image. The detected shapes of Ship 1 and Ship 2 are relatively complete, and each ship's body is effectively separated from its trail. Because of the particular shape of Ship 3, there is a "fault" in gray values between the rear half and the front half of the ship, with the rear half nearly as dark as the water surface. As a result, the watershed segmentation cannot detect the trailing part of the ship and ultimately separates it from the front half of Ship 3.
Overall, the target detection algorithm proposed in this paper can effectively detect moving ships in multi-view image sequences with high accuracy, recovering the shape information of the targets relatively completely.

4. Conclusions

To solve the problems caused by the long baselines and large optical parallax of agile satellites when they capture image sequences, we propose a moving target detection algorithm based on marker-based watershed segmentation, with foreground extraction by an inter-frame difference algorithm and background extraction by Otsu's method. This approach exploits the relevance and continuity of moving ship targets in a multi-view image sequence, and overcomes the disadvantages of traditional object detection methods, which are sensitive to changes in background features and unreliable when extracting target information. Our method therefore effectively improves moving target detection. It shows good results for detecting moving ships on the water surface and can rapidly provide information on the positions, shapes and textures of targets, which lays the foundation for the observation and tracking of ships [23], provides timely and effective guidance for decision-making on the ground, and has broad application prospects.

Acknowledgments

This work was substantially supported by the Fundamental Research Funds for the Central Universities (2042017kf0042), the Open Fund of Twenty First Century Aerospace Technology Co., Ltd. (Grant No. 21AT-2016-02), and the National Natural Science Foundation of China (91438111). This support is gratefully acknowledged.

Author Contributions

Yao Shun and Chang Xueli carried out the experiment and wrote this paper. Cheng Yufeng and Jin Shuying participated in the experiment. Zuo Deshan was responsible for the data acquisition. All authors reviewed the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, S.; Yin, D.; Dou, X. Moving target information extraction based on single satellite image. Acta Geod. Cartogr. Sin. 2015, 44, 316–322. [Google Scholar]
  2. Zhang, J.; Mao, X.; Chen, T. Survey of moving object tracking algorithms. Appl. Res. Comput. 2009, 12, 4407–4410. [Google Scholar]
  3. Kornprobst, P.; Deriche, R.; Aubert, G. Image sequence analysis via partial differential equations. J. Math. Imaging Vis. 1999, 11, 5–26. [Google Scholar] [CrossRef]
  4. Aubert, G.; Kornprobst, P. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations; Springer Science & Business Media: Berlin, Germany, 2006; Volume 147. [Google Scholar]
  5. Elgammal, A.; Duraiswami, R.; Harwood, D.; Davis, L.S. Background and foreground modeling using nonparametric kernel density estimation for visual surveillance. Proc. IEEE 2002, 90, 1151–1163. [Google Scholar] [CrossRef]
  6. Elgammal, A.; Harwood, D.; Davis, L. Non-parametric model for background subtraction. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2000; pp. 751–767. [Google Scholar]
  7. Stauffer, C.; Grimson, W.E.L. Adaptive background mixture models for real-time tracking. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Fort Collins, CO, USA, 23–25 June 1999; IEEE: New York, NY, USA, 1999; pp. 246–252. [Google Scholar]
  8. Stauffer, C.; Grimson, W.E.L. Learning patterns of activity using real-time tracking. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 747–757. [Google Scholar] [CrossRef]
  9. Grimson, W.E.L.; Stauffer, C.; Romano, R.; Lee, L. Using adaptive tracking to classify and monitor activities in a site. In Proceedings of the 1998 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Santa Barbara, CA, USA, 25 June 1998; IEEE: New York, NY, USA, 1998; pp. 22–29. [Google Scholar]
  10. Migliore, D.A.; Matteucci, M.; Naccari, M. A revaluation of frame difference in fast and robust motion detection. In Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, Santa Barbara, CA, USA, 27 October 2006; ACM: New York, NY, USA, 2006; pp. 215–218. [Google Scholar]
  11. Neri, A.; Colonnese, S.; Russo, G.; Talone, P. Automatic moving object and background separation. Signal Process. 1998, 66, 219–232. [Google Scholar] [CrossRef]
  12. Zhan, C.; Duan, X.; Xu, S.; Song, Z.; Luo, M. An improved moving object detection algorithm based on frame difference and edge detection. In Proceedings of the Fourth International Conference on Image and Graphics (ICIG), Sichuan, China, 22–24 August 2007; IEEE: New York, NY, USA, 2007; pp. 519–523. [Google Scholar]
  13. Ma, W.; Zhao, Y.; Zhang, G.; Jie, F.; Pan, Q.; Li, G.; Liu, Y. Infrared dim target detection based on multi-structural element morphological filter combined with adaptive threshold segmentation. Acta Photonica Sin. 2011, 7, 1020–1024. [Google Scholar]
  14. Aragón-Calvo, M.A.; Jones, B.J.T.; Van De Weygaert, R.; Van Der Hulst, J.M. The multiscale morphology filter: Identifying and extracting spatial patterns in the galaxy distribution. Astron. Astrophys. 2007, 474, 315–338. [Google Scholar] [CrossRef]
  15. Otsu, N. A threshold selection method from gray-level histograms. Automatica 1975, 11, 23–27. [Google Scholar] [CrossRef]
  16. Fan, J.; Zhao, F. Two-dimensional Otsu’s curve thresholding segmentation method for gray-Level images. Acta Electron. Sin. 2007, 35, 751–755. [Google Scholar]
  17. Vincent, L.; Soille, P. Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 583–598. [Google Scholar] [CrossRef]
  18. Lotufo, R.; Silva, W. Minimal set of markers for the watershed transform. In Proceedings of the ISMM, Berlin, German, 20–21 June 2002; ACM: New York, NY, USA, 2002; pp. 359–368. [Google Scholar]
  19. Strahler, A.N. Quantitative analysis of watershed geomorphology. Eos. Trans. Am. Geophys. Union 1957, 38, 913–920. [Google Scholar] [CrossRef]
  20. Xian, G.; Crane, M. Assessments of urban growth in the Tampa Bay watershed using remote sensing data. Remote Sens. Environ. 2005, 97, 203–215. [Google Scholar] [CrossRef]
  21. Fu, Q.; Celenk, M. Marked watershed and image morphology based motion detection and performance analysis. In Proceedings of the 2013 8th International Symposium on Image and Signal Processing and Analysis (ISPA), Trieste, Italy, 4–6 September 2013; IEEE: New York, NY, USA, 2013; pp. 159–164. [Google Scholar]
  22. Wei, J.; Li, P.; Yang, J.; Zhang, J. Removing the Effects of Azimuth Ambiguities on Ship Detection Based on Polarimetric SAR Data. Acta Geod. Cartogr. Sin. 2013, 42, 530–539. [Google Scholar]
  23. Wang, W.; Li, Q. A vehicle tracking algorithm with monte-carlo method. Acta Geod. Cartogr. Sin. 2011, 2, 200–203. [Google Scholar]
Figure 1. The method of capturing image sequences by agile satellites.
Figure 2. The algorithm flow.
Figure 3. 0°, 45°, 90° and 135° structuring elements.
Figure 4. Original multi-view sequence images.
Figure 5. The experimental results of histogram matching: (a) sequence images before histogram matching; (b) sequence images after histogram matching.
Figure 6. Histograms before and after histogram matching: (a) before histogram matching; (b) after histogram matching.
Figure 7. Inter-frame difference results: (a) the first difference; (b) the second difference.
Figure 8. Comparison between top-hat filtering and multi-structuring element filtering: (a) original data; (b) top-hat filtering result; (c) multi-structuring element filtering result.
Figure 9. The result of background extraction: (a) original data; (b) Otsu's method result.
Figure 10. The comparison between traditional and marker-based watershed segmentation: (a) traditional watershed result; (b) marker-based watershed result.
Figure 11. The final results: (a) contour line of Ship 1; (b) contour line of Ship 2; (c) contour line of Ship 3.
Table 1. Interval time and base-height ratio of sequence images.

Frames              1–2        2–3
Interval time       22 s       23 s
Base-height ratio   0.02643    0.02655
Table 2. Correlation test before and after histogram matching.

Frames                   1–2 Before     1–2 After    1–3 Before    1–3 After
Correlation              0.4330         0.8130       0.1281        0.7050
Chi-square               256,590.2673   29.4254      79,456.7418   64.9714
Intersection             77.0942        106.2336     48.8889       67.6905
Bhattacharyya distance   0.4009         0.3118       0.5704        0.4224
