Article

HDetect-VS: Tiny Human Object Enhancement and Detection Based on Visual Saliency for Maritime Search and Rescue

Systems Engineering Institute, Academy of Military Sciences, PLA, Beijing 100166, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2024, 14(12), 5260; https://doi.org/10.3390/app14125260
Submission received: 28 April 2024 / Revised: 12 June 2024 / Accepted: 13 June 2024 / Published: 18 June 2024
(This article belongs to the Special Issue Advanced Image Analysis and Processing Technologies and Applications)

Abstract
Strong sun glint noise is an inevitable obstruction for tiny human object detection in maritime search and rescue (SAR) tasks; it can significantly deteriorate the performance of local contrast method (LCM)-based algorithms and cause high false alarm rates. For SAR tasks in noisy environments, it is more important to find tiny objects than to localize them precisely. Hence, considering background clutter and strong glint noise, in this study, a noise suppression methodology for maritime scenarios (HDetect-VS) is established to achieve tiny human object enhancement and detection based on visual saliency. To this end, the pixel intensity value distributions, color characteristics, and spatial distributions are thoroughly analyzed to separate objects from background and glint noise. Using unmanned aerial vehicles (UAVs), visible images with rich details, rather than infrared images, are applied to detect tiny objects in noisy environments. In this study, a grayscale model mapped from the HSV model (HSV-gray) is used to suppress glint noise based on color characteristic analysis, and large-scale Gaussian Convolution is utilized to obtain the pixel intensity surface and suppress background noise based on pixel intensity value distributions. Moreover, based on a thorough analysis of the spatial distribution of objects and noise, two-step clustering is employed to separate objects from noise in a salient point map. Experiments conducted on the SeaDronesSee dataset illustrate that HDetect-VS is more robust and effective than other pixel-level algorithms for tiny object detection in noisy environments. In particular, the performance of existing deep learning-based object detection algorithms can be significantly improved by taking the results of HDetect-VS as input.

1. Introduction

Currently, unmanned aerial vehicles (UAVs) are employed in object detection for maritime search and rescue (SAR) tasks. Because of the large field of view of UAVs, the observed objects occupy only a few pixels, even in high-resolution images. In addition, the common phenomenon of sun glint on the sea surface severely affects the visual saliency of tiny objects. In computer vision, the detection of tiny objects is an inherently difficult task for both conventional and deep learning-based detectors. Furthermore, glint noise degrades the quality of captured images and significantly weakens the features of objects concealed by sun glint, further complicating tiny object detection. All the above issues make tiny object enhancement and detection based on visual saliency a meaningful task for maritime SAR in noisy environments.
Studies on object detection for maritime SAR mainly focus on radar signals [1], infrared images [2], and visible images [3]. The radar devices employed in maritime environmental perception are shore-based and airborne [4]. The objects detected using these devices based on the methods of signal processing, deep learning [5], and graph convolutional networks [6] mainly include vessels, floating ice, aircraft, and boats. The carriers employed for infrared imaging are UAVs [7] and vessels [8]. The objects detected in infrared images via methods of intelligent algorithms [9] and signal processing [10,11] are mainly low-speed vessels [7,9]. For the detection of tiny human objects easily concealed by noise signals, the use of visible images with rich colors, concrete shapes, and texture details is more effective than that of radar and infrared images.
Sea surface sun glint shows strong reflection characteristics, which can significantly influence UAV imaging quality, making this phenomenon one of the causes of strong noise in tiny human object detection.
A large number of infrared-based methods for maritime object detection based on visual saliency have been proposed. Single-frame detection methods, which are widely applied because they require less prior knowledge than sequential detection methods [12], can be further divided into background-characteristic-based [13,14], object-characteristic-based [15,16], and combined background–object-characteristic [17,18] methods. Background-characteristic-based methods estimate the difference between the original image and the background. Object-characteristic-based methods build a saliency map from the local information of the objects [15,16] and can efficiently detect tiny objects in infrared images. However, these methods are very sensitive to complex noise and perform poorly in complex and noisy maritime backgrounds, where such conditions cause high false alarm rates.
In tiny human object detection with strong noise, background noise suppression and object enhancement are crucial for methods based on visual saliency. The pixel intensity value in grayscale and infrared images can reflect the saliency of the object. However, strong noise generally has a similar pixel intensity value to that of tiny objects, which can cause a high false alarm rate. Tiny objects and noise in visible images may have similar saliency to those in grayscale images, but they usually have different color characteristics in the RGB channels, which can be used for tiny object enhancement in maritime environments with strong glint noise.
The performance of deep learning-based algorithms degrades significantly when tiny objects are concealed by sun glint. Since noise can be introduced during image restoration, image denoising and reflection removal must be applied with care to images containing tiny objects. In addition, strong glint noise can make a tiny object inconspicuous to deep learning-based methods that rely on feature learning [19,20].
With the development of computer vision in recent years, UAVs’ capability to automatically detect and recognize human attributes has become more relevant in the context of round-the-clock surveillance [21,22,23,24,25,26,27]. SeaDronesSee [27] is a large-scale dataset of high-resolution visible images captured by UAVs in which human targets are frequently concealed by sun glint. This dataset has prompted research on tiny target detection for maritime SAR based on UAVs [3,28,29,30,31]. On this basis, we developed a method for sea surface sun glint suppression to achieve the enhancement and detection of tiny targets based on visual saliency.
In summary, the existing methods for maritime human detection based on visual saliency present the following issues:
  • When tiny human targets are concealed by sea surface sun glint, saliency-based methods have high false alarm rates, and deep learning-based methods have low recall.
  • When solely based on infrared and grayscale images, the saliency values of strong glint noise and targets are too similar to achieve noise suppression and object enhancement.
  • Object detection based on visual saliency is generally carried out on single-channel images, while visible images with tiny objects in maritime scenarios are seldom used for denoising and tiny object enhancement. Further, color characteristics are not studied effectively in maritime human target detection using saliency-based methods.
To solve the above-mentioned problems, we propose a robust method to achieve maritime tiny human object detection in sun glint-containing images based on visual saliency (HDetect-VS), as shown in Figure 1. The contributions of this study are summarized as follows:
  • Tiny object enhancement and detection under sea surface sun glint conditions is carried out based on visible images rather than single-channel images (infrared and grayscale images). The color characteristic differences between noise and objects are utilized to suppress strong glint noise and enhance object features.
  • HSV-gray, a method based on color characteristic difference, is used to suppress strong sea surface sun glint. Large-scale Gaussian Convolution is employed to suppress background noise based on the characteristics of the pixel intensity value. Two-step clustering is employed to achieve tiny object detection and localization based on the spatial distribution characteristics of object and noise saliency.
  • A simple and effective method based on visual saliency is used to perform maritime tiny target detection. Our experiment results on SeaDronesSee demonstrate that the proposed method is robust in noisy maritime environments with respect to detection accuracy. By taking the HDetect-VS results as prior knowledge, existing deep learning-based methods can achieve a significant improvement in detection performance.
Figure 1. Tiny human target detection method for maritime SAR based on visual saliency (HDetect-VS).

2. Related Works

2.1. Tiny Object Detection in Maritime Scenarios

The significant difference in pixel intensity values between tiny objects and the background is crucial in object detection based on visual saliency. Methods based on pixel intensity difference are generally applied to single-channel infrared images and rarely to grayscale visible images. Compared with maritime scenarios, the background in field scenarios is relatively complicated, leading to a low signal-to-noise ratio around tiny objects; this makes it difficult to distinguish objects from strong noise and limits the performance of saliency-based methods. In maritime UAV-captured images, sun glint, waves, and haze can diminish the saliency of objects, while the flat, monochromatic background inherently enhances the saliency of tiny objects with distinctive colors. Thus, under favorable conditions, maritime tiny human object detection can be performed based on visual saliency and the analysis of background and object characteristics.
Generally, research on object detection is based on the local contrast measurement mechanism (LCM) [32] and its derivative methods [16,33,34,35]. The local differences in pixel intensity values between objects and the background in grayscale imagery are analyzed to highlight the saliency of objects; then, object localization can be performed by setting a threshold. The methods based on visual saliency are generally applied in infrared images with flat backgrounds. However, it is difficult to limit the high probability of false alarms caused by sun glint [36,37]. Multi-scale average absolute gray difference (MS-AAGD) and Laplacian of point spread function (LoPSF) are utilized to suppress false alarm sources [38]. However, when object signals are concealed by strong noise, it is more difficult to detect them based on pixel intensity difference. Therefore, in maritime scenarios, it is much more difficult to localize tiny human targets in noisy infrared images, while target detection can be achieved in visible images with rich details.
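To make the LCM mechanism concrete, the following is a minimal single-scale sketch in Python (NumPy). It is a hedged simplification of Chen et al. [32], whose published method is multi-scale and more elaborate; the cell size and the cell-grid traversal here are illustrative assumptions.

```python
import numpy as np

def lcm_map(gray: np.ndarray, cell: int = 9) -> np.ndarray:
    """Simplified single-scale local contrast measure.
    For each 3x3 grid of cells, the contrast is Lmax^2 / max(mean_i),
    where Lmax is the maximum gray value of the center cell and mean_i
    are the mean gray values of the eight surrounding cells."""
    h, w = gray.shape
    g = gray.astype(np.float32)
    out = np.zeros_like(g)
    for y in range(cell, h - 2 * cell, cell):
        for x in range(cell, w - 2 * cell, cell):
            lmax2 = g[y:y + cell, x:x + cell].max() ** 2
            means = [g[y + dy:y + dy + cell, x + dx:x + dx + cell].mean()
                     for dy in (-cell, 0, cell) for dx in (-cell, 0, cell)
                     if (dy, dx) != (0, 0)]
            out[y:y + cell, x:x + cell] = lmax2 / max(max(means), 1e-6)
    return out  # objects are then localized by thresholding this map
```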
Deep learning-based methods are well suited for real-time object detection, with excellent performance in generic scenarios. In images captured by UAVs, high-resolution characteristics and tiny-sized objects complicate object detection. Under limitations of computing and time resources, it is difficult for deep learning-based algorithms to directly train models based on high-resolution images. Generic algorithms scale the input images to a fixed size to reduce computing and time costs, which inevitably causes a reduction in the detail level of features. Moreover, the difference between the theoretical receptive field and the effective receptive field is intensified for tiny objects, further leading to the reduction in positive target samples [39,40].
Generic models [41] show much better performance on large and medium objects than on tiny objects. To improve performance in tiny object detection, data augmentation [42,43,44,45] and feature fusion [46,47] have been proposed to enhance the learning and presentation of features. However, the details that can be learned in the receptive field for tiny objects are insufficient; therefore, the problem of poor and weak features cannot be effectively overcome. A solution is represented by generative adversarial networks (GANs) [47], which transform the original poor and weak features of tiny objects into highly discriminative ones by introducing fine-grained details from lower-level layers.
In tiny object detection tasks in maritime SAR, the negative effect of sun glint on object texture needs to be addressed. When objects are concealed by sun glint, model performance is significantly poorer due to their weak features. In real-world SAR tasks, it is more important to find objects reliably, rather than accurately localizing them [48]. According to the evident clustering in maritime object distribution, we developed a two-step strategy to detect tiny objects in maritime SAR. The results of HDetect-VS can be used as prior knowledge to improve the detection performance of existing deep learning-based methods.

2.2. Image Processing for Glint Suppression

Because of the variable trajectory and view of UAVs, inevitably, there are glint areas with high pixel intensity values in maritime UAV images, which can significantly influence the visual saliency of objects.
Polarizers, conventionally employed in imaging devices, can suppress sea surface sun glint noise, but their effectiveness is significantly influenced by wind speed, wind direction, and solar zenith angle [49]. Moreover, the application of polarizers is generally based on imagery time-domain polarization characteristics [50] and sensor information [51]. Research on glint suppression based on deep learning originates from image restoration, which is used to refine the poor image quality caused by improper exposure of the primary objects in the image; the drawback of this process is the introduction of noise, which can reduce the signal-to-noise ratio and weaken the features and saliency of objects. Therefore, employing image restoration for denoising is not appropriate for tiny human detection in maritime scenarios.
Although both sun glint and objects in maritime scenarios have high pixel intensity values, HDetect-VS exploits the differences in their spatial distributions and color characteristics to suppress the saliency of sun glint and enhance that of objects.

3. Sea Surface Sun Glint Suppression and Tiny Human Object Detection Based on Visual Saliency

3.1. Characteristic Analysis of Maritime UAV Images

In SAR tasks, the maritime background is relatively flat compared with field scenarios and generally comprises the sea surface, glint, and sky. The visual characteristics of the background and objects in maritime UAV images are as follows:
  • For the areas of sea surface and sky, there is almost no large-area shading caused by occlusion. The color distribution of the whole image is uniform, with few color mutations.
  • Sun glint on the sea surface, which is due to the variable trajectory and view of UAVs, appears white in RGB images. The visual saliency of tiny objects concealed by sun glint is decreased significantly. However, the color and brightness of maritime objects are different from those of sun glint and background.
  • Because of the large field of view of UAVs, maritime objects generally present a clustered spatial distribution.
The characteristics at the pixel level reflecting the above visual features are as follows:
  • The pixel intensity values of the sea surface and sky change continuously and evenly on the grayscale, and those of objects such as human targets and boats differ from those of the background, which can form pixel peaks or valleys.
  • The pixel intensity values of sun glint on the grayscale and in different RGB channels are close to 255. Objects also have high pixel intensity values on the grayscale, and when they are concealed by sun glint, it is difficult to separate them from glint noise on the grayscale. However, tiny objects have different pixel distributions in the RGB channels compared with sun glint.
As shown in Figure 2, the high pixel intensity values of sun glint and sky on the grayscale significantly interfere with object detection based on pixel saliency. Against backgrounds that appear as pixel surfaces, objects have much higher pixel intensity values and are well distinguishable, but when they are concealed by sun glint, their visual saliency decreases significantly. Therefore, the difficult task of separating the salient points of objects and glint to achieve tiny object detection based on visual saliency needs to be addressed.

3.2. Sea Surface Sun Glint Suppression and Object Saliency Enhancement

3.2.1. Glint Suppression Based on HSV-Gray Method

Because of the variable trajectory of UAVs, sea surface sun glint inevitably appears in visible images. To achieve object saliency enhancement, the high pixel intensity values of glint and sky should be suppressed based on the characteristics of pixel distribution in the RGB channels.
Both glint and objects have high pixel intensity values on the grayscale, which cannot reflect the color characteristics of maritime objects. However, the difference in the pixel distributions of objects and glint noise in the RGB channels is significant. Specifically, in different RGB channels, a colored object has distinct pixel intensity values, while white glint presents no significant differences, with all values being close to 255.
In the HSV (Hue, Saturation, and Value) model, based on human vision, different HSV channels numerically represent the color characteristics of objects. Therefore, the saliency of objects can be enhanced and that of glint noise can be effectively suppressed by using a grayscale mapped from the HSV model (HSV-gray).
As shown in Figure 3, different colors have distinct values in the HSV channels. The values of white are zero in the H and S channels and maximum in the V channel, while the values of black are zero in all channels. Red and green have maximum values in the S and V channels and different values in the H channel because of their different color properties. We map the H, S, and V channels to the R, G, and B channels, respectively. HSV-gray is then calculated from the mapped HSV model in the same way that a common grayscale is calculated from the RGB model.
On the grayscale calculated based on the RGB model (RGB-gray) and mapped HSV model (HSV-gray), the pixel intensity value of black is zero. The pixel intensity values of white are 9.69 in HSV-gray compared with 255 in RGB-gray. Red and green have much higher pixel intensity values compared with white in HSV-gray. Therefore, HSV-gray can be applied to enhance the saliency of objects concealed by the glint noise.
The pixel intensity values are calculated based on HSV-gray, and the mapped results are presented as PHSV-gray. The relationship between PHSV-gray and the pixel intensity values in different RGB channels is shown in Figure 4. Individual colors have distinct differences in pixel intensity values in different RGB channels. They present much higher PHSV-gray values than the colors that have similar values in the RGB model. Therefore, glint with similar values in different RGB channels has a much lower PHSV-gray value than objects with bright colors, which allows the saliency of the glint area to be suppressed.
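As an illustration of this mapping, a minimal sketch in Python with OpenCV follows. The channel ranges produced by OpenCV (H in [0, 179], S and V in [0, 255]) and the BT.601 grayscale weights are our assumptions, so the exact values (e.g., the 9.69 reported for white) depend on the normalization actually used by the authors.

```python
import cv2
import numpy as np

def hsv_gray(image_bgr: np.ndarray) -> np.ndarray:
    """Map H, S, V onto R, G, B and apply a common grayscale.
    Saturated colors keep high values, while near-white glint
    (S close to 0) collapses to a low gray value."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h, s, v = cv2.split(hsv)  # OpenCV ranges: H in [0, 179], S, V in [0, 255]
    # Treat H->R, S->G, V->B and apply the ITU-R BT.601 luma weights.
    gray = 0.299 * h + 0.587 * s + 0.114 * v
    return np.clip(gray, 0, 255).astype(np.uint8)
```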

3.2.2. Maritime Background Suppression Based on Large-Scale Gaussian Convolution

Because of the view and trajectory variation of UAVs, the sea surface and sky, forming the background with high pixel intensity values, significantly influence the saliency of objects. To suppress this effect, the concept of pixel surface based on large-scale Gaussian Convolution was developed.
Gaussian Convolution is the weighted average of the intensity of adjacent positions, with weights decreasing with the spatial distance from the center position (p). The influence that one pixel has on another thus depends only on their distance (r) in the image [52]. As shown in Equation (1), an image I filtered with Gaussian Convolution is given as GC[I]p, and the weight for pixel q is defined as Gσ(||p − q||) as follows:
$$
GC[I]_p = \sum_{q \in S} G_{\sigma}(\lVert p - q \rVert)\, I_q, \qquad
G_{\sigma}(r) = \frac{1}{2 \pi \sigma^{2}} \exp\!\left( -\frac{r^{2}}{2 \sigma^{2}} \right) \tag{1}
$$
where Gσ(r) denotes the 2D Gaussian kernel, σ is a parameter defining the standard deviation, and S represents all the possible image locations.
The weight and the spatial distance can be changed by adjusting the Gaussian kernel size (rmax) and the standard deviation (σ). The edge and saliency of an object are lost with high rmax and σ because averaging is performed over a much larger area. Then, the image is blurred to obtain a smooth pixel surface (Pgauss).
$$
H_{\mathrm{RGB}} = H(I_{\mathrm{RGB}}), \qquad P_{\mathrm{HSV\text{-}gray}} = \mathrm{Gray}(H_{\mathrm{RGB}}) \tag{2}
$$
$$
P_{\mathrm{gauss}} = GC[H_{\mathrm{RGB}}] \tag{3}
$$
$$
P_{\mathrm{saliency}} = 255 \cdot \frac{P_{\mathrm{HSV\text{-}gray}} - P_{\mathrm{gauss}}}{\max\left( P_{\mathrm{HSV\text{-}gray}} - P_{\mathrm{gauss}} \right)} \tag{4}
$$
As shown in Equation (2), the HSV mapping function, H(I), is applied to the visible image (IRGB) to obtain the mapped HSV image (HRGB). The mapped gray intensity values (PHSV-gray) are then calculated with a common grayscale function; representative results are shown in the first column of Figure 5. Large-scale Gaussian Convolution is applied to the maritime UAV images with a standard deviation (σ) much larger than the object size, and a kernel size of 4σ is used to obtain the Gaussian pixel surface (Pgauss) given in Equation (3). In practice, σ is set to about two or three times the size of tiny objects; in this study, we set σ = 40. The pixel surface results are shown in the second column of Figure 5. The difference between PHSV-gray and Pgauss is the object saliency, Psaliency, given in Equation (4); its representations are shown in the third column of Figure 5.
As shown in Figure 5, PHSV-gray shows that the color characteristics of an object can be effectively highlighted, and Pgauss demonstrates that the high pixel values around objects can be effectively suppressed with large-scale Gaussian Convolution. Finally, Psaliency-3D illustrates that the pixel intensity values of background such as the sea surface and sky can be significantly suppressed and tiny human targets highlighted.
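A minimal sketch of Equations (2)–(4) in Python with OpenCV is given below; `hsv_gray` denotes the mapping sketched in Section 3.2.1, and clipping negative differences to zero is our assumption, since Equation (4) leaves the handling of pixels darker than the surface implicit.

```python
import cv2
import numpy as np

def saliency(p_hsv_gray: np.ndarray, sigma: float = 40.0) -> np.ndarray:
    """Eqs. (2)-(4): subtract a large-scale Gaussian pixel surface
    from the HSV-gray image and rescale the result to [0, 255]."""
    ksize = int(4 * sigma) | 1  # 4*sigma kernel size, forced odd for OpenCV
    p = p_hsv_gray.astype(np.float32)
    p_gauss = cv2.GaussianBlur(p, (ksize, ksize), sigma)  # pixel surface
    diff = np.maximum(p - p_gauss, 0)  # assumption: clip negative residue
    return (255 * diff / max(diff.max(), 1e-6)).astype(np.uint8)
```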

3.3. Maritime Object Detection Based on Two-Step Clustering

As shown in the representation of object saliency in Figure 5, the saliency of maritime tiny human objects is relatively low compared with that of boats and other objects. In addition, the pixel intensity values of some background noise signals are much higher and close to the saliency of the targets. To perform object localization, an adaptive threshold is applied for preliminary screening of the salient points in the whole image. Namely, localization is initially conducted on the pixels whose intensity values exceed the adaptive threshold; then, the salient points of objects and residual strong noise are separated by two-step clustering to achieve maritime tiny object detection based on visual saliency.
The salient points identified based on the adaptive threshold mainly comprise object and noise salient points, where the latter can be further divided into glint noise and background noise. Using a lower threshold can generally identify the salient points of tiny human objects, but a set of background noise signals with higher pixel intensity values can also be introduced, as shown in Figure 6b.
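The exact adaptive-threshold rule is not specified here; the sketch below assumes a common mean-plus-k-standard-deviations rule on the saliency map, which reproduces the trade-off described above (a lower k admits more background noise points).

```python
import numpy as np

def salient_points(p_saliency: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Preliminary screening: keep pixels whose saliency exceeds an
    adaptive threshold. The mean + k*std rule is an assumption; the
    text states only that an adaptive threshold is used."""
    thresh = p_saliency.mean() + k * p_saliency.std()
    ys, xs = np.nonzero(p_saliency > thresh)
    return np.column_stack([xs, ys])  # N x 2 array of (x, y) coordinates
```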
The differences in spatial distribution between object and noise salient points can be used for maritime UAV image denoising. The spatial distribution characteristics of the different salient points are as follows. Background noise salient points generally appear as outliers, as background noise is scarce and sparsely distributed across the whole image. Glint noise points are far more numerous and are centrally and uniformly distributed within the high-intensity glint area; however, their local density is low. Object salient points, by contrast, are few but densely clustered. These differences in quantity and density make the three kinds of salient points separable.
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is an unsupervised clustering method based on minimum density estimation [53]. All the points are clustered by setting two parameters of the algorithm: the distance radius (ε) and the minimum quantity of clustered points (Pmin). Points that do not belong to any clusters can be considered outliers.
Background noise, glint noise, and objects have different spatial distribution characteristics. Therefore, two-step DBSCAN clustering is carried out to successively eliminate background and glint noise.
The first step of clustering, based on large ε and small Pmin, is to eliminate background noise that is sparsely distributed in the whole image. ε = 150 and Pmin = 50 were used in this study for this first step. A large ε value can identify the points distributed in local areas such as object clusters and glint clusters. The Pmin for this step is much smaller than those of object and glint clusters but larger than that of the background cluster. Thus, the points of background noise are too far from each other to satisfy the requirement of Pmin within the distance of ε for being clustered.
The second step of clustering, based on small ε and large Pmin, is to eliminate glint noise, which is centrally and uniformly distributed. ε = 2 and Pmin = 20 were used in this study for this second step. A small ε can identify object clusters with high density. For this step, ε is smaller than the distance among glint noise points. The points of glint noise are too far from each other to satisfy the requirement of Pmin within the distance of ε for being clustered.
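With the parameters given above, the two-step clustering can be sketched with scikit-learn's DBSCAN as follows; the input is the N × 2 array of salient-point coordinates produced by the preliminary screening.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def two_step_clustering(points: np.ndarray) -> np.ndarray:
    """Step 1 (large eps, small min_samples) discards sparse background
    noise as outliers; step 2 (small eps, large min_samples) discards
    low-density glint noise, keeping only dense object clusters."""
    # Step 1: eps = 150, Pmin = 50 (Section 3.3).
    labels1 = DBSCAN(eps=150, min_samples=50).fit_predict(points)
    kept = points[labels1 != -1]       # label -1 marks outliers
    # Step 2: eps = 2, Pmin = 20.
    labels2 = DBSCAN(eps=2, min_samples=20).fit_predict(kept)
    return kept[labels2 != -1]         # salient points of objects
```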
As shown in Figure 6c, the salient points of the objects in the original visible image were effectively separated by using two-step DBSCAN clustering. The area of the objects, especially including tiny human targets, is indicated by the red box. It is essential to appropriately set the parameters ε and Pmin in two-step DBSCAN clustering, since objects are generally concealed by sun glint and background noise.

4. Experiments

4.1. Dataset

Experiments were conducted on the dataset of Object Detection V2 of SeaDronesSee [27], which is a large-scale dataset aimed at helping to develop systems for SAR by using UAVs in maritime scenarios. The dataset includes high-resolution visible images captured by a UAV under different field-of-view and flight height conditions, with 8930 images for training and 1547 images for validation. There are five classes in the dataset, i.e., swimmer, boat, jet ski, lifesaving appliances, and buoy, where the swimmer class represents about 64% of the total and most of the objects are tiny objects.

4.2. Experimental Settings

The experiments comprised two parts: first, tiny object detection based on visual saliency (HDetect-VS) was conducted on the training and validation sets of Object Detection V2; then, object detection based on YOLOX [54] was conducted on the results of HDetect-VS. The experiments were carried out on a computer with an NVIDIA A30 GPU (NVIDIA, Santa Clara, CA, USA), and training was performed with PyTorch [55] under Python 3.8.1.
Given the wide field of view of UAVs, maritime objects are generally densely distributed in certain areas. The proposed HDetect-VS method can be used to perform area localization, i.e., to determine boxes that include multi-scale maritime objects. For the evaluation of the algorithm, we employed the Aerial Detection Ratio (ADR), defined as the ratio between the number of objects inside the predicted boxes and the number of all ground-truth objects. As shown in Figure 7, there are three kinds of boxes in the results: a mismatched box with no objects, an error-matched box containing only part of the objects, and a well-matched box containing all the objects. Considering the input image size used in deep learning-based object detection, the box sizes were set to multiples of 640 × 640.
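For clarity, a minimal sketch of the ADR computation is shown below; the box and ground-truth representations are our assumptions.

```python
import numpy as np

def aerial_detection_ratio(boxes, gt_points) -> float:
    """ADR: fraction of ground-truth object locations that fall inside
    any predicted area box. boxes: iterable of (x0, y0, x1, y1);
    gt_points: N x 2 array of (x, y) object locations."""
    covered = sum(
        any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in boxes)
        for x, y in gt_points
    )
    return covered / max(len(gt_points), 1)
```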

4.3. Analysis of Experimental Results

  • Object detection based on visual saliency (HDetect-VS)
The results of the experiment conducted on Object Detection V2, shown in Table 1, indicate that the ADRs for all classes and for tiny human targets were over 95% and over 96%, respectively, on both the training and validation sets. This indicates that the proposed method based on glint suppression and pixel surface can effectively enhance the saliency of objects. The density characteristics of maritime object distribution can be utilized to perform tiny human target detection.
First, to achieve the suppression of sea surface sun glint, we applied the HSV-gray method to effectively suppress the high pixel intensity values for areas that presented white in the RGB images. However, since the boat objects in SeaDronesSee also have white characteristics, their saliency was also slightly suppressed, resulting in a low ADR for boats. On the contrary, the color characteristics of lifesaving appliances and jet ski objects were much more defined, resulting in relatively high ADRs.
Second, to comprehensively evaluate the effectiveness of the HDetect-VS results as prior knowledge, objects that the prior stage fails to detect should also be counted in the results of the deep learning-based algorithm, as shown in Table 2.
The results of tiny object detection for SAR obtained with different algorithms are compared in Figure 8.
Improved Local Contrast Measure (ILCM) [15], MS-AAGD, and HDetect-VS were used to perform tiny object detection in a normal maritime environment. The performance and edge quality of the objects detected with ILCM were better than the results of MS-AAGD and HDetect-VS. When sun glint appeared in an image, ILCM could not distinguish between glint noise and objects, and when the objects were concealed by sun glint, as in the Glint images in Figure 6 and Figure 8, ILCM failed to detect them, generating false alarms.
Compared with the ILCM algorithm, MS-AAGD slightly suppressed the dense glint in Glint 1 and Glint 2 in Figure 8. However, the effect of MS-AAGD on scattered glint suppression was comparatively small, as illustrated for Glint 3 in Figure 8. It should be noted that MS-AAGD is sensitive to the edge of tiny objects but not to their color characteristics, as illustrated for Glint 3 in Figure 8, in which the buoy was not detected.
Our proposed HDetect-VS algorithm achieved tiny object detection in strong glint, as shown in Figure 6 and Figure 8. This indicates that the performance of HDetect-VS is robust and effective for maritime SAR. The edge quality of the detected objects was slightly poorer compared with ILCM and MS-AAGD. However, in maritime SAR, it is more important to find tiny objects concealed by strong noise signals than accurately localize them.
Figure 8. Comparison of results of tiny object detection algorithms for maritime SAR. Yellow points in Glint Crop of input images represent locations of tiny objects.
  • Benchmark experiment of object detection based on YOLOX (wo-Prior)
This experiment was conducted on Object Detection V2 (wo-Prior), aiming to provide a benchmark for object detection conducted on the results obtained with HDetect-VS.
In Table 2, we report the results obtained with wo-Prior for maritime object detection based on the multi-scale YOLOX model with different input image sizes. The variation in 0.5–0.95 mAP was about 0.5–1 points. By increasing the input image size from 640² to 1024², for the different classes, AP increased by 4–8 points and mAP by 4 points. It can be noted that increasing the level of detail of objects is more conducive to performance improvement than increasing the size of the feature map based on a multi-scale model. Moreover, the improvement for tiny objects is much higher than that for medium and large objects.
The performance ceiling for tiny objects for a certain input image size is much lower than that for medium and large objects. Therefore, increasing the level of detail of objects is essential for performance improvement in tiny object detection under conditions of a wide field of view.
  • Object detection on prior results based on YOLOX (w-Prior)
The experiment was conducted based on the area results obtained with HDetect-VS (w-Prior), aiming to effectively increase the details of tiny objects.
As shown for w-Prior in Table 2, maritime object detection performance based on the multi-scale model with different input image sizes showed significant improvements. The improvement for 640² images was much higher than that for 1024² images. The improvement in 0.5 mAP was relatively high: 6 points on average and 10.3 points at maximum. The improvement in 0.5–0.95 mAP was about 4–6 points due to the improvement for tiny objects, such as 13–17 and 2–6 AP points for the lifesaving appliances and swimmer classes, respectively. By contrast, the improvements in AP for the boat and jet ski classes were much lower.
Table 2. Deep learning method results of maritime object detection on prior results obtained with our methodology.

| Experiment | Size | YOLOX | mAP (IoU 0.5) | mAP (IoU 0.5–0.95) | AP: Swimmer | AP: Boat | AP: Jet Ski | AP: Lifesaving Appliances | AP: Buoy |
|---|---|---|---|---|---|---|---|---|---|
| wo-Prior | 640² | s | 79.2 | 45.5 | 31.62 | 69.58 | 52.85 | 24.89 | 48.46 |
| | | m | 77.5 | 46.5 | 33.21 | 71.23 | 55.93 | 22.48 | 49.90 |
| | | l | 76.9 | 47.0 | 32.93 | 72.56 | 57.34 | 21.60 | 50.59 |
| | 1024² | s | 82.7 | 50.7 | 37.10 | 74.05 | 56.33 | 29.27 | 56.66 |
| | | m | 81.0 | 50.9 | 38.35 | 74.47 | 56.94 | 26.18 | 58.38 |
| | | l | 81.3 | 51.3 | 37.96 | 75.41 | 58.28 | 27.34 | 57.33 |
| w-Prior | 640² | s | 85.3 | 51.7 | 36.69 | 70.94 | 56.85 | 38.91 | 54.88 |
| | | m | 83.7 | 50.9 | 37.70 | 70.24 | 57.61 | 35.34 | 53.50 |
| | | l | 87.2 | 53.2 | 38.90 | 72.91 | 60.07 | 36.48 | 57.62 |
| | 1024² | s | 87.1 | 54.7 | 39.41 | 72.52 | 59.33 | 42.86 | 59.55 |
| | | m | 88.1 | 54.8 | 40.00 | 72.68 | 58.61 | 43.78 | 59.11 |
| | | l | 89.3 | 56.2 | 41.12 | 74.28 | 60.46 | 45.16 | 60.15 |
| w-Prior Total | 640² | s | 81.7 | 49.5 | 35.31 | 67.00 | 56.67 | 38.91 | 49.69 |
| | | m | 80.1 | 48.7 | 36.28 | 66.34 | 57.43 | 35.34 | 48.43 |
| | | l | 83.5 | 50.9 | 37.44 | 68.86 | 59.88 | 36.48 | 52.17 |
| | 1024² | s | 83.4 | 52.4 | 37.93 | 68.49 | 59.15 | 42.86 | 53.91 |
| | | m | 84.3 | 52.5 | 38.50 | 68.64 | 58.42 | 43.78 | 53.52 |
| | | l | 85.5 | 53.8 | 39.58 | 70.15 | 60.27 | 45.16 | 54.46 |
To comprehensively evaluate the effect of the previously obtained area results on deep learning algorithms, the objects that could not be previously detected were considered, as shown in the w-Prior Total results in Table 2. The improvement in object detection with 640² images was relatively high. The improvement in 0.5 mAP was about 2.5–6.6 points, and that in 0.5–0.95 mAP was about 2.2–4 points. The lifesaving appliances and swimmer classes were improved by about 13–15 and 3.1–4.5 AP points, respectively. The ADR for the boat class was lower than that for the other classes due to the use of the HSV-gray method, and the reduction for this class was about 3 AP points.
Compared with field and city environments, the maritime environment is more complex for tiny object detection tasks. In particular, the strong glint noise caused by light reflection significantly affects both the imaging quality of and the detection performance on tiny objects. To improve tiny human target detection performance under limited computing resources, the deployment of HDetect-VS on UAV edge devices should be researched in the future. Different lighting conditions, variable flight paths, and flight attitudes might affect the robustness of the algorithm; thus, robustness improvement in real maritime scenarios under such harsh conditions will be considered in our future work.

5. Conclusions

In this study, we argue that the use of visible images is more effective in tiny human target detection based on visual saliency in noisy maritime environments than that of infrared images. Sea surface sun glint can significantly deteriorate the performance of pixel-level detection algorithms, such as LCM-based algorithms. To this end, HDetect-VS is proposed to suppress strong noise and enhance the saliency of tiny human targets. The color characteristics of glint noise and objects, which are not taken into account in conventional pixel-level algorithms, are introduced into the HSV-gray method to suppress glint noise and retain the saliency of objects. The pixel intensity value distribution is presented as the pixel intensity surface calculated with large-scale Gaussian Convolution. For maritime objects with specific spatial distribution characteristics, two-step clustering can be used to separate tiny objects from sun glint and background noise. Moreover, the results of HDetect-VS can be used as prior knowledge in deep learning-based algorithms to significantly improve object detection performance. Experiments on the SeaDronesSee dataset showed that the proposed HDetect-VS method is robust and effective in maritime tiny human object detection.

Author Contributions

Conceptualization, Z.F., F.N. and J.S.; methodology, Z.F.; software, Z.F.; validation, Z.F., Y.X. and D.D.; formal analysis, Z.F.; investigation, Z.F. and Y.X.; resources, Z.F.; data curation, Z.F. and D.D.; writing—original draft preparation, Z.F.; writing—review and editing, Z.F.; visualization, Z.F.; supervision, F.N., L.M. and J.S.; project administration, F.N. and J.S.; funding acquisition, F.N., L.M. and J.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding authors.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Panagopoulos, S.; Soraghan, J.J. Small-target detection in sea clutter. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1355–1361. [Google Scholar] [CrossRef]
  2. Jian, L.; Wen, G. Maritime target detection and tracking. In Proceedings of the 2019 IEEE 2nd International Conference on Automation, Electronics and Electrical Engineering (AUTEEE), Shenyang, China, 22–24 November 2019; pp. 309–314. [Google Scholar]
  3. Zhao, H.; Zhang, H.; Zhao, Y. YOLOv7-sea: Object detection of maritime UAV images based on improved YOLOv7. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), Waikoloa, HI, USA, 3–7 February 2023; pp. 233–238. [Google Scholar]
  4. Ding, H.; Liu, N.; Dong, Y.; Chen, X. Overview and prospects of radar sea clutter measurement experiments. J. Radars 2019, 8, 281–302. [Google Scholar]
  5. Mou, X.; Chen, X.; Guan, J.; Chen, B.; Dong, Y. Marine Target detection based on improved Faster R-CNN for navigation radar PPI images. In Proceedings of the 2019 International Conference on Control, Automation and Information Sciences (ICCAIS), Chengdu, China, 23 April 2020; pp. 1–5. [Google Scholar]
  6. Su, N.; Chen, X.; Guan, J.; Huang, Y. Maritime target detection based on radar graph data and graph convolutional network. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  7. Goddijn-Murphy, L.; Williamson, B.J.; McIlvenny, J.; Corradi, P. Using a UAV thermal infrared camera for monitoring floating marine plastic litter. Remote Sens. 2022, 14, 3179. [Google Scholar] [CrossRef]
  8. Sun, X.; Xiong, W.; Shi, H. A novel spatiotemporal filtering for dim small infrared maritime target detection. In Proceedings of the 2022 International Symposium on Electrical, Electronics and Information Engineering (ISEEIE), Chiang Mai, Thailand, 25–27 February 2022; pp. 195–201. [Google Scholar]
  9. Ye, J.; Li, C.; Wen, W.; Zhou, R.; Reppa, V. Deep learning in maritime autonomous surface ships: Current development and challenges. J. Marine. Sci. Appl. 2023, 22, 584–601. [Google Scholar] [CrossRef]
  10. Yang, P.; Dong, L.; Xu, W. Detecting small infrared maritime targets overwhelmed in heavy waves by weighted multidirectional gradient measure. IEEE Geosci. Remote Sens. Lett. 2022, 19, 1–5. [Google Scholar] [CrossRef]
  11. Wang, B.; Motai, Y.; Dong, L.; Xu, W. Detecting infrared maritime targets overwhelmed in sun glitters by antijitter spatiotemporal saliency. IEEE Trans. Geosci. Remote Sens. 2019, 57, 5159–5173. [Google Scholar] [CrossRef]
  12. Yang, P.; Dong, L.; Xu, W. Infrared Small Maritime Target Detection Based on Integrated Target Saliency Measure. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 2369–2386. [Google Scholar] [CrossRef]
  13. Wang, B.; Dong, L.; Zhao, M.; Wu, H.; Xu, W. Texture orientation-based algorithm for detecting infrared maritime targets. Appl Opt. 2015, 54, 4689–4697. [Google Scholar] [CrossRef]
  14. Bai, X.; Zhou, F. Analysis of new top-hat transformation and the application for infrared dim small target detection. Pattern Recognit. 2010, 43, 2145–2156. [Google Scholar] [CrossRef]
  15. Han, J.; Ma, Y.; Zhou, B.; Fan, F.; Liang, K.; Fang, Y. A Robust Infrared Small Target Detection Algorithm Based on Human Visual System. IEEE Geosci. Remote Sens. Lett. 2014, 11, 2168–2172. [Google Scholar]
  16. Wei, Y.; You, X.; Li, H. Multiscale patch-based contrast measure for small infrared target detection. Pattern Recognit. J. Pattern Recognit. Soc. 2016, 58, 216–226. [Google Scholar] [CrossRef]
  17. Yao, S.; Chang, Y.; Qin, X. A Coarse-to-Fine Method for Infrared Small Target Detection. IEEE Geosci. Remote Sens. Lett. 2019, 16, 256–260. [Google Scholar] [CrossRef]
  18. Zhang, L.; Peng, Z. Infrared Small Target Detection Based on Partial Sum of the Tensor Nuclear Norm. Remote Sens. 2019, 11, 382. [Google Scholar] [CrossRef]
  19. Gao, J.; Lin, Z.; An, W. Infrared Small Target Detection Using a Temporal Variance and Spatial Patch Contrast Filter. IEEE Access 2019, 7, 32217–32226. [Google Scholar] [CrossRef]
  20. Gao, C.; Wang, L.; Xiao, Y.; Zhao, Q.; Meng, D. Infrared small-dim target detection based on Markov random field guided noise modeling. Pattern Recogn. 2018, 76, 463–475. [Google Scholar] [CrossRef]
  21. Silva, H.; Almeida, J.M.; Lopes, F.; Ribeiro, J.P.; Freitas, S.; Amaral, G.; Almeida, C.; Martins, A.; Silva, E. UAV trials for multi-spectral imaging target detection and recognition in maritime environment. In Proceedings of the OCEANS 2016 MTS/IEEE Monterey, Monterey, CA, USA, 19–23 September 2016; pp. 1–6. [Google Scholar]
  22. Mohsan, S.A.H.; Othman, N.Q.H.; Li, Y.; Alsharif, M.H.; Khan, M.A. Unmanned aerial vehicles (UAVs): Practical aspects, applications, open challenges, security issues, and future trends. Intell. Serv. Robot. 2023, 16, 109–137. [Google Scholar] [CrossRef]
  23. Gonçalves, L.; Damas, B. Automatic detection of rescue targets in maritime search and rescue missions using UAVs. In Proceedings of the 2022 International Conference on Unmanned Aircraft Systems (ICUAS), Dubrovnik, Croatia, 21–24 June 2022; pp. 1638–1643. [Google Scholar]
  24. Li, Q.; Jussi, T.; Jorge, P.Q.; Tuan, N.G.; Moncef, G.; Hanun, T.; Jenni, R.; Tomi, W. Towards Active Vision with UAVs in Marine Search and Rescue: Analyzing Human Detection at Variable Altitudes. In Proceedings of the 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Abu Dhabi, United Arab Emirates, 4–6 November 2020; pp. 65–70. [Google Scholar]
  25. Duan, H.; Xu, X.; Deng, Y.; Zeng, Z. Unmanned aerial vehicle recognition of maritime small-target based on biological eagle-eye vision adaptation mechanism. IEEE Trans. Aerosp. Electron. Syst. 2021, 57, 3368–3382. [Google Scholar] [CrossRef]
  26. Bhuiya, M.S.R.; Islam, N.; Drishty, A.S.; Akash, U.D.; Saha, S.S.; Chakrabarty, A.; Hossain, S. Surveillance in maritime scenario using deep learning and swarm intelligence. In Proceedings of the 2022 25th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, 17–19 December 2022; pp. 569–574. [Google Scholar]
  27. Varga, L.A.; Kiefer, B.; Messmer, M.; Zell, A. SeaDronesSee: A maritime benchmark for detecting humans in open water. In Proceedings of the 2022 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 3–8 January 2022; pp. 3686–3696. [Google Scholar]
  28. Bovcon, B.; Muhovič, J.; Vranac, D.; Mozetic, D.; Pers, J.; Kristan, M. MODS—A USV-oriented object detection and obstacle segmentation benchmark. IEEE Trans. Intell. Transp. Syst. 2021, 23, 13403–13418. [Google Scholar] [CrossRef]
  29. Zhang, Y.; Tao, Q.; Yin, Y. A Lightweight Man-Overboard Detection and Tracking Model Using Aerial Images for Maritime Search and Rescue. Remote Sens. 2024, 16, 165. [Google Scholar] [CrossRef]
  30. Zhang, Y.; Yin, Y.; Shao, Z. An Enhanced Target Detection Algorithm for Maritime Search and Rescue Based on Aerial Images. Remote Sens. 2023, 15, 4818. [Google Scholar] [CrossRef]
  31. Zhang, L.; Zhang, N.; Shi, R.; Wang, G.; Xu, Y.; Chen, Z. SG-Det: Shuffle-GhostNet-Based Detector for Real-Time Maritime Object Detection in UAV Images. Remote Sens. 2023, 15, 3365. [Google Scholar] [CrossRef]
  32. Chen, C.L.P.; Li, H.; Wei, Y.; Xia, T.; Tang, Y. A local contrast method for small infrared target detection. IEEE Trans. Geosci. Remote Sens. 2014, 52, 574–581. [Google Scholar] [CrossRef]
  33. Han, J.; Ma, Y.; Huang, J.; Mei, X.; Ma, J. An infrared small target detecting algorithm based on human visual system. IEEE Geosci. Remote Sens. Lett. 2016, 13, 452–456. [Google Scholar] [CrossRef]
  34. Bai, X.; Bi, Y. Derivative entropy-based contrast measure for infrared small-target detection. IEEE Trans. Geosci. Remote Sens. 2018, 56, 2452–2466. [Google Scholar] [CrossRef]
  35. Chan, Y. Comprehensive comparative evaluation of background subtraction algorithms in open sea environments. Comput. Vis. Image Underst. 2021, 202, 103101. [Google Scholar] [CrossRef]
  36. Shi, Y.; Wei, Y.; Yao, H.; Pan, D.; Xiao, G. High-boost-based multiscale local contrast measure for infrared small target detection. IEEE Geosci. Remote Sens. Lett. 2018, 15, 33–37. [Google Scholar] [CrossRef]
  37. Yang, J.; Gu, Y.; Sun, Z.; Cui, Z. A small infrared target detection method using adaptive local contrast measurement. In Proceedings of the 2019 IEEE International Instrumentation and Measurement Technology Conference (I2MTC), Auckland, New Zealand, 20–23 May 2019; pp. 1–6. [Google Scholar]
  38. Moradi, S.; Moallem, P.; Sabahi, M.F. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm. Infrared Phys. Technol. 2018, 89, 387–397. [Google Scholar] [CrossRef]
  39. Xu, C.; Wang, J.; Yang, W.; Yu, H.; Yu, L.; Xia, G. RFLA: Gaussian receptive field based label assignment for tiny object detection. In Proceedings of the 17th European Conference on Computer Vision 2022, Tel Aviv, Israel, 23–27 October 2022; pp. 526–543. [Google Scholar]
  40. Xu, C.; Wang, J.; Yang, W.; Yu, L. Dot distance for tiny object detection in aerial images. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Nashville, TN, USA, 19–25 June 2021; pp. 1192–1201. [Google Scholar]
  41. Arani, E.; Gowda, S.; Mukherjee, R.; Magdy, O.; Kathiresan, S.K.; Zonooz, B. A Comprehensive Study of Real-Time Object Detection Networks Across Multiple Domains: A Survey. arXiv 2022, arXiv:2208.10895. [Google Scholar]
  42. Yu, X.; Gong, Y.; Jiang, N.; Ye, Q.; Han, Z. Scale match for tiny person detection. In Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass, CO, USA, 1–5 March 2020; pp. 1246–1254. [Google Scholar]
  43. Kisantal, M.; Wojna, Z.; Murawski, J. Augmentation for small object detection. In Proceedings of the 9th International Conference on Advances in Computing and Information Technology, Sydney, Australia, 19 February 2019; pp. 119–133. [Google Scholar]
  44. Chen, C.; Zhang, Y.; Lv, Q.; Wei, S.; Wang, X.; Sun, X.; Dong, J. RRNet: A hybrid detector for object detection in drone-captured images. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Republic of Korea, 27–28 October 2019; pp. 100–108. [Google Scholar]
  45. Chen, Y.; Zhang, P.; Li, Z.; Li, Y.; Zhang, X.; Qi, L.; Sun, J.; Jia, J. Dynamic scale training for object detection. Computer Vision and Pattern Recognition. arXiv 2020, arXiv:2004.12432. [Google Scholar]
  46. Kong, T.; Yao, A.; Chen, Y.; Sun, F. HyperNet: Towards accurate region proposal generation and joint object detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 845–853. [Google Scholar]
  47. Lin, T.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature Pyramid networks for object detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar]
  48. Pyrrö, P.; Naseri, H.; Jung, A. Rethinking drone-based search and rescue with aerial person detection. arXiv 2021, arXiv:2111.09406. [Google Scholar]
  49. Chen, W.; Sun, X.; Qiao, Y.; Chen, Z.; Yin, Y. Polarization detection of marine targets covered in glint. Infrared Laser Eng. 2017, 46, 63–68. [Google Scholar]
  50. Liang, J.; Wang, X.; He, S.; Jin, W. Sea surface clutter suppression method based on time-domain polarization characteristics of Sun glint. Opt. Express 2019, 27, 2142–2158. [Google Scholar] [CrossRef] [PubMed]
  51. Zhao, H.; Ji, Z.; Zhang, Y.; Sun, X.; Song, P.; Li, Y. Mid-infrared imaging system based on polarizers for detecting marine targets covered in Sun glint. Opt. Express 2016, 24, 16396–16409. [Google Scholar] [CrossRef] [PubMed]
  52. Paris, S.; Kornprobst, P.; Tumblin, J.; Durand, F. From Gaussian convolution to bilateral filtering. Bilater. Filter. Theory Appl. 2009, 4, 4–10. [Google Scholar]
  53. Schubert, E.; Sander, J.; Ester, M.; Kriegel, H.P.; Xu, X. DBSCAN revisited, revisited: Why and how you should (still) use DBSCAN. ACM Trans. Database Syst. 2017, 42, 1–21. [Google Scholar] [CrossRef]
  54. Zheng, G.; Liu, S.; Wang, F.; Li, Z.; Sun, J. YOLOX: Exceeding YOLO Series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar]
  55. Adam, P.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 12–14 December 2019. [Google Scholar]
Figure 2. The characteristics of pixel intensity value distributions of maritime UAV images. (a) Visible images and (b) 3D plots of pixel intensity values on the grayscale.
Figure 3. The calculation of HSV-gray according to the mapped HSV model.
Figure 4. The relationship between PHSV-gray and pixel intensity values in the different RGB channels.
Figure 5. The pixel values of the images for HSV-gray (PHSV-gray), pixel surface (Pgauss), and object saliency (Psaliency-3D).
Figure 6. The separation of salient points of objects and noise based on two-step clustering. (a) Maritime image with sun glint; (b) Salient points of object and noise; (c) Salient points of objects (clustered).
Figure 7. The boxes obtained with HDetect-VS. The yellow points represent the locations of tiny objects (humans and boats in this figure).
Table 1. Results of tiny object detection based on visual saliency (Object Detection V2).
| Split | Class | Ground-Truth Objects | Detected Objects | ADR |
|---|---|---|---|---|
| Validation | Swimmer | 6206 | 5973 | 96.25% |
| | Boat | 2214 | 2091 | 94.44% |
| | Jet ski | 320 | 319 | 99.69% |
| | Lifesaving appliances | 330 | 330 | 100.00% |
| | Buoy | 560 | 507 | 90.54% |
| | Val. total | 9630 | 9220 | 95.74% |
| Training | Swimmer | 37,096 | 35,679 | 96.18% |
| | Boat | 13,022 | 12,104 | 92.95% |
| | Jet ski | 2330 | 2306 | 98.97% |
| | Lifesaving appliances | 923 | 920 | 99.67% |
| | Buoy | 4389 | 4237 | 96.54% |
| | Train. total | 57,760 | 55,246 | 95.65% |