Motion Saliency Detection for Surveillance Systems Using Streaming Dynamic Mode Decomposition
Abstract
1. Introduction
- We introduce a fast, memory-efficient approach to motion saliency generation for surveillance systems operating on streaming video data.
- Spatial–temporal features are extracted from video through a sparse reconstruction process based on streaming dynamic mode decomposition (s-DMD).
- The coarse saliency map is then refined into the final motion saliency map with a difference-of-Gaussians (DoG) filter applied in the frequency domain.
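The DoG refinement in the last step can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the paper's implementation: the filter widths `sigma1`/`sigma2` and the final min–max normalization are illustrative choices, not the authors' settings.

```python
import numpy as np

def dog_frequency_filter(saliency, sigma1=2.0, sigma2=30.0):
    """Refine a coarse saliency map with a difference-of-Gaussians
    (band-pass) filter applied in the frequency domain.

    sigma1 < sigma2 are spatial-domain Gaussian widths; their values
    here are illustrative, not the paper's settings.
    """
    h, w = saliency.shape
    # Frequency grid in cycles per pixel, DC at index (0, 0).
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    r2 = fx**2 + fy**2
    # A spatial Gaussian with std sigma has the frequency response
    # exp(-2 * pi^2 * sigma^2 * f^2), i.e. a Gaussian low-pass.
    g1 = np.exp(-2 * (np.pi**2) * (sigma1**2) * r2)
    g2 = np.exp(-2 * (np.pi**2) * (sigma2**2) * r2)
    dog = g1 - g2  # band-pass: suppresses both DC and fine noise
    filtered = np.real(np.fft.ifft2(np.fft.fft2(saliency) * dog))
    # Min-max normalize to [0, 1] for display/thresholding.
    filtered -= filtered.min()
    if filtered.max() > 0:
        filtered /= filtered.max()
    return filtered
```

Applying the filter in the frequency domain keeps the cost at two FFTs and one element-wise multiply per frame, independent of the Gaussian widths.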
2. Related Works
3. Dynamic Mode Decomposition Background
4. The Proposed Methodology
4.1. Motion Saliency Generation Based on s-DMD
4.2. From Coarse to Fine Motion Saliency Map
Algorithm 1: s-DMD for motion saliency.
Algorithm 2: Generation of the motion saliency map.
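As context for Algorithm 1, the DMD background model that s-DMD maintains incrementally can be sketched in batch form. This is a hedged illustration of DMD-based background/foreground separation in the spirit of Grosek and Kutz, not the paper's s-DMD update rule; the unit-rank background assumption and amplitude handling are simplifications.

```python
import numpy as np

def dmd_motion_saliency(frames):
    """Coarse motion saliency via DMD background subtraction.

    frames: array of shape (n_pixels, n_frames); each column is a
    vectorized grayscale frame. Batch (non-streaming) sketch; the
    paper's s-DMD updates these quantities as frames arrive.
    """
    X1, X2 = frames[:, :-1], frames[:, 1:]
    # Exact DMD via the reduced SVD of the first snapshot matrix.
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / S)
    eigvals, W = np.linalg.eig(Atilde)
    Phi = X2 @ Vh.conj().T @ np.diag(1.0 / S) @ W  # DMD modes
    # The background corresponds to the mode whose eigenvalue is
    # closest to 1 (near-zero temporal frequency).
    bg_idx = np.argmin(np.abs(eigvals - 1.0))
    b = np.linalg.pinv(Phi) @ frames[:, 0]  # mode amplitudes
    background = np.abs(Phi[:, bg_idx] * b[bg_idx])
    # Foreground (motion) saliency: residual after removing background.
    return np.abs(frames - background[:, None])
```

The streaming variant avoids storing the full snapshot matrices by updating the low-rank factors incrementally, which is what makes the method memory-efficient for surveillance feeds.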
5. Experimental Results
5.1. Evaluation Metrics
5.2. Comparison Results of Various State-of-the-Art Methods
5.3. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Liu, T.; Yuan, Z.; Sun, J.; Wang, J.; Zheng, N.; Tang, X.; Shum, H.Y. Learning to detect a salient object. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 353–367. [Google Scholar] [CrossRef] [Green Version]
- Liu, Z.; Shi, R.; Shen, L.; Xue, Y.; Ngan, K.N.; Zhang, Z. Unsupervised salient object segmentation based on kernel density estimation and two-phase graph cut. IEEE Trans. Multimed. 2012, 14, 1275–1289. [Google Scholar] [CrossRef]
- Hadizadeh, H.; Bajić, I.V. Saliency-aware video compression. IEEE Trans. Image Process. 2014, 23, 19–33. [Google Scholar] [CrossRef] [PubMed]
- Lei, J.; Wu, M.; Zhang, C.; Wu, F.; Ling, N.; Hou, C. Depth preserving stereo image retargeting based on pixel fusion. IEEE Trans. Multimed. 2017, 19, 1442–1453. [Google Scholar] [CrossRef]
- Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process. 2014, 23, 4270–4281. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Han, S.; Vasconcelos, N. Image compression using object-based regions of interest. In Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006; pp. 3097–3100. [Google Scholar]
- Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259. [Google Scholar] [CrossRef] [Green Version]
- Harel, J.; Koch, C.; Perona, P. Graph-based visual saliency. In Proceedings of the Neural Information Processing Systems (NIPS), Vancouver, BC, Canada, 4–7 December 2006. [Google Scholar]
- Zhang, L.; Tong, M.; Marks, T.; Shan, H.; Cottrell, G. SUN: A Bayesian framework for saliency using natural statistics. J. Vis. 2008, 8, 1–20. [Google Scholar] [CrossRef] [Green Version]
- Jiang, B.; Zhang, L.; Lu, H.; Yang, C.; Yang, M.-H. Saliency detection via absorbing Markov chain. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 1665–1672. [Google Scholar]
- Cheng, M.; Zhang, G.; Mitra, N.J.; Huang, X.; Hu, S. Global contrast based salient region detection. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 21–25 June 2011; pp. 409–416. [Google Scholar]
- Achanta, R.; Hemami, S.; Estrada, F.; Süsstrunk, S. Frequency-tuned salient region detection. In Proceedings of the Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1597–1604. [Google Scholar]
- Yeh, H.-H.; Liu, K.-H.; Chen, C.-S. Salient object detection via local saliency estimation and global homogeneity refinement. Pattern Recognit. 2014, 47, 1740–1750. [Google Scholar] [CrossRef]
- Shen, X.; Wu, Y. A unified approach to salient object detection via low rank matrix recovery. In Proceedings of the Computer Vision and Pattern Recognition (CVPR) 2012, Providence, RI, USA, 16–21 June 2012; pp. 853–860. [Google Scholar]
- Goferman, S.; Zelnik-Manor, L.; Tal, A. Context-aware saliency detection. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2376–2383. [Google Scholar] [CrossRef] [Green Version]
- Hou, X.; Zhang, L. Saliency detection: A spectral residual approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 21–26 July 2007; pp. 1–8. [Google Scholar]
- Zhang, L.; Tong, M.; Cottrell, G. SUNDAy: Saliency using natural statistics for dynamic analysis of scenes. In Proceedings of the 31st Annual Cognitive Science Conference, Amsterdam, The Netherlands, 29 July–1 August 2009. [Google Scholar]
- Zhong, S.-H.; Liu, Y.; Ren, F.; Zhang, J.; Ren, T. Video saliency detection via dynamic consistent spatiotemporal attention modelling. In Proceedings of the National Conference of the American Association for Artificial Intelligence, Washington, DC, USA, 14–18 July 2013; pp. 1063–1069. [Google Scholar]
- Mauthner, T.; Possegger, H.; Waltner, G.; Bischof, H. Encoding based saliency detection for videos and images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2015, Boston, MA, USA, 7–12 June 2015; pp. 2494–2502. [Google Scholar]
- Wang, W.; Shen, J.; Porikli, F. Saliency-aware geodesic video object segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2015, Boston, MA, USA, 7–12 June 2015; pp. 3395–3402. [Google Scholar]
- Yubing, T.; Cheikh, F.A.; Guraya, F.F.E.; Konik, H.; Trémeau, A. A spatiotemporal saliency model for video surveillance. Cogn. Comput. 2011, 3, 241–263. [Google Scholar] [CrossRef] [Green Version]
- Ren, Z.; Gao, S.; Rajan, D.; Chia, L.; Huang, Y. Spatiotemporal saliency detection via sparse representation. In Proceedings of the 2012 IEEE International Conference on Multimedia and Expo Workshops, Melbourne, Australia, 9–13 July 2012; pp. 158–163. [Google Scholar] [CrossRef]
- Chen, C.; Li, S.; Wang, Y.; Qin, H.; Hao, A. Video saliency detection via spatial-temporal fusion and low-rank coherency diffusion. IEEE Trans. Image Process. 2017, 26, 3156–3170. [Google Scholar] [CrossRef]
- Xue, Y.; Guo, X.; Cao, X. Motion saliency detection using low-rank and sparse decomposition. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 1485–1488. [Google Scholar] [CrossRef]
- Bhattacharya, S.; Venkatesh, K.S.; Gupta, S. Visual saliency detection using spatiotemporal decomposition. IEEE Trans. Image Process. 2018, 27, 1665–1675. [Google Scholar] [CrossRef] [PubMed]
- Wang, W.; Shen, J.; Shao, L. Consistent video saliency using local gradient flow optimization and global refinement. IEEE Trans. Image Process. 2015, 24, 4185–4196. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Kim, H.; Kim, Y.; Sim, J.-Y.; Kim, C.-S. Spatiotemporal saliency detection for video sequences based on random walk with restart. IEEE Trans. Image Process. 2015, 24, 2552–2564. [Google Scholar] [CrossRef] [PubMed]
- Cui, X.; Liu, Q.; Zhang, S.; Yang, F.; Metaxas, D.N. Temporal spectral residual for fast salient motion detection. Neurocomputing 2012, 86, 24–32. [Google Scholar] [CrossRef]
- Alshawi, T. Ultra-fast saliency detection using QR factorization. In Proceedings of the 2019 53rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 3–6 November 2019; pp. 1911–1915. [Google Scholar] [CrossRef]
- Borji, A.; Cheng, M.; Jiang, H.; Li, J. Salient object detection: A benchmark. IEEE Trans. Image Process. 2015, 24, 5706–5722. [Google Scholar] [CrossRef] [Green Version]
- Cong, R.; Lei, J.; Fu, H.; Cheng, M.; Lin, W.; Huang, Q. Review of visual saliency detection with comprehensive information. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 2941–2959. [Google Scholar] [CrossRef] [Green Version]
- Borji, A.; Itti, L. State-of-the-art in visual attention modeling. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 185–207. [Google Scholar] [CrossRef]
- Schmid, P.J.; Sesterhenn, J.L. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 2008. [Google Scholar] [CrossRef] [Green Version]
- Tu, J.H.; Rowley, C.W.; Luchtenburg, D.M.; Brunton, S.L.; Kutz, J.N. On dynamic mode decomposition: Theory and applications. J. Comput. Dyn. 2014. [Google Scholar] [CrossRef] [Green Version]
- Grosek, J.; Kutz, J.N. Dynamic mode decomposition for real-time background/foreground separation in video. arXiv 2014, arXiv:1404.7592. [Google Scholar]
- Bi, C.; Yuan, Y.; Zhang, J.; Shi, Y.; Xiang, Y.; Wang, Y.; Zhang, R. Dynamic mode decomposition based video shot detection. IEEE Access 2018, 6, 21397–21407. [Google Scholar] [CrossRef]
- Sikha, O.K.; Kumar, S.S.; Soman, K.P. Salient region detection and object segmentation in color images using dynamic mode decomposition. J. Comput. Sci. 2018, 25, 351–366. [Google Scholar] [CrossRef]
- Sikha, O.K.; Soman, K.P. Multi-resolution dynamic mode decomposition-based salient region detection in noisy images. SIViP 2020, 14, 167–175. [Google Scholar] [CrossRef]
- Yu, C.; Zheng, X.; Zhao, Y.; Liu, G.; Li, N. Review of intelligent video surveillance technology research. In Proceedings of the 2011 International Conference on Electronic and Mechanical Engineering and Information Technology, EMEIT 2011, Harbin, China, 12–14 August 2011; pp. 230–233. [Google Scholar] [CrossRef]
- Hemati, M.S.; Williams, M.O.; Rowley, C.W. Dynamic mode decomposition for large and streaming datasets. Phys. Fluids 2014, 26. [Google Scholar] [CrossRef] [Green Version]
- Wang, Y.; Jodoin, P.-M.; Porikli, F.; Konrad, J.; Benezeth, Y.; Ishwar, P. CDnet 2014: An expanded change detection benchmark dataset. In Proceedings of the IEEE Workshop on Change Detection (CDW-2014) at CVPR-2014, Columbus, OH, USA, 23–28 June 2014; pp. 387–394. [Google Scholar]
- Borji, A.; Tavakoli, H.R.; Sihite, D.N.; Itti, L. Analysis of scores, datasets, and models in visual saliency prediction. In Proceedings of the IEEE International Conference on Computer Vision IEEE Computer Society, Sydney, Australia, 1–8 December 2013; pp. 921–928. [Google Scholar]
- Fan, D.-P.; Cheng, M.-M.; Liu, Y.; Li, T.; Borji, A. Structure-measure: A new way to evaluate foreground maps. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4558–4567. [Google Scholar]
- Peters, R.J.; Iyer, A.; Itti, L.; Koch, C. Components of bottom-up gaze allocation in natural images. Vis. Res. 2005, 45, 2397–2416. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- le Meur, O.; le Callet, P.; Barba, D. Predicting visual fixations on video based on low-level visual features. Vis. Res. 2007, 47, 2483–2498. [Google Scholar] [CrossRef] [Green Version]
- Seo, H.J.; Milanfar, P. Non-parametric bottom-up saliency detection by self-resemblance. In Proceedings of the 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Miami, FL, USA, 20–25 June 2009; pp. 45–52. [Google Scholar] [CrossRef] [Green Version]
- Tavakoli, H.R.; Rahtu, E.; Heikkilä, J. Fast and efficient saliency detection using sparse sampling and kernel density estimation. In Proceedings of the 17th Scandinavian conference on Image analysis (SCIA’11), Ystad, Sweden, 23–27 May 2011; Springer: Berlin/Heidelberg, Germany, 2011; pp. 666–675. [Google Scholar]
- Schauerte, B.; Stiefelhagen, R. Quaternion-based spectral saliency detection for eye fixation prediction. In Proceedings of the 12th European Conference on Computer Vision—ECCV 2012, Florence, Italy, 7–13 October 2012; Springer: Berlin/Heidelberg, Germany, 2012; Volume 7573, pp. 116–129. [Google Scholar]
- Kim, J.; Han, D.; Tai, Y.; Kim, J. Salient region detection via high-dimensional color transform and local spatial support. IEEE Trans. Image Process. 2016, 25, 9–23. [Google Scholar] [CrossRef]
- Margolin, R.; Tal, A.; Zelnik-Manor, L. What makes a patch distinct? In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1139–1146. [Google Scholar] [CrossRef] [Green Version]
- Lou, J.; Zhu, W.; Wang, H.; Ren, M. Small target detection combining regional stability and saliency in a color image. Multimed. Tools Appl. 2017, 76, 14781–14798. [Google Scholar] [CrossRef]
- Wloka, C.; Kunić, T.; Kotseruba, I.; Fahimi, R.; Frosst, N.; Bruce, N.D.B.; Tsotsos, J.K. SMILER: Saliency model implementation library for experimental research. arXiv 2018, arXiv:1812.08848. [Google Scholar]
- Li, Y.; Mou, X. Saliency detection based on structural dissimilarity induced by image quality assessment model. J. Electron. Imaging 2019, 28, 023025. [Google Scholar] [CrossRef] [Green Version]
Models | Features | Type | Description |
---|---|---|---|
Zhong et al. [18] | color, orientation, texture, motion features | Fusion model | Dynamic consistent optical flow for motion saliency map |
Mauthner et al. [19] | color, motion features | Fusion model | Encoding-based approach to approximate the joint feature distribution |
Wang et al. [20] | spatial static edges, motion boundary edges | Fusion model | Super-pixel-based; geodesic distance to compute the probability for object segmentation |
Yubing et al. [21] | color, intensity, orientation, motion vector field | Fusion model | Motion saliency and stationary saliency are merged with Gaussian distance weights |
Ren et al. [22] | sparse representation, motion trajectories | Fusion model | Patch-based method; learns reconstruction coefficients to encode the motion trajectory for motion saliency |
Chen et al. [23] | motion gradient, color gradient | Fusion model | Guides fusion of low-level saliency maps using low-rank coherency |
Xue et al. [24] | low-rank, sparse decomposition | Direct-pipeline model | Stacks the temporal slices along the X–T and Y–T planes |
Bhattacharya et al. [25] | spatiotemporal features, color cues | Direct-pipeline model | Weighted sum of the sparse features along three orthogonal directions determines the salient regions |
Wang et al. [26] | gradient flow field, local and global contrasts | Direct-pipeline model | Gradient flow field incorporates intra-frame and inter-frame information to highlight salient regions |
Kim et al. [27] | low-level cues, motion distinctiveness, temporal consistency, abrupt change | Direct-pipeline model | Random walk with restart is used to detect spatially and temporally salient regions |
Category | Video Sequence | No. of Frames | Frame Resolution | Description |
---|---|---|---|---|
Baseline | highway | 1700 | 320 × 240 | A mixture of the other categories |
Baseline | office | 2050 | 360 × 240 | |
Baseline | pedestrian | 1099 | 360 × 240 | |
Baseline | PETS2006 | 120 | 720 × 576 | |
Dynamic Background | canoe | 1189 | 320 × 240 | Strong background motion such as water and trees |
Dynamic Background | overpass | 3000 | 320 × 240 | |
Bad Weather | blizzard | 7000 | 720 × 480 | Poor weather conditions such as snow and fog |
Bad Weather | skating | 3900 | 540 × 360 | |
Camera Jitter | badminton | 1150 | 720 × 480 | Vibrating cameras in outdoor environments |
Camera Jitter | traffic | 1570 | 320 × 240 | |
Intermittent Object Motion | sofa | 2750 | 320 × 240 | Objects that move, stop, then move again |
Intermittent Object Motion | streetlight | 3200 | 320 × 240 | |
Per-sequence results of the proposed method on the five evaluation metrics:

Video Sequence | Abbr. | MAE | AUC-Borji | S-Measure | NSS | CC |
---|---|---|---|---|---|---|
highway | HIG | 0.071 | 0.801 | 0.499 | 2.158 | 0.561 |
office | OFF | 0.069 | 0.719 | 0.464 | 1.298 | 0.426 |
pedestrian | PED | 0.130 | 0.659 | 0.658 | 2.245 | 0.400 |
PETS2006 | PET | 0.051 | 0.842 | 0.479 | 3.310 | 0.457 |
canoe | CAN | 0.194 | 0.522 | 0.390 | 0.505 | 0.136 |
overpass | OVE | 0.116 | 0.514 | 0.357 | 0.196 | 0.074 |
blizzard | BLI | 0.017 | 0.526 | 0.344 | 1.094 | 0.167 |
skating | SKA | 0.136 | 0.481 | 0.344 | 0.511 | 0.136 |
sidewalk | SID | 0.345 | 0.483 | 0.149 | 0.071 | 0.111 |
traffic | TRA | 0.054 | 0.581 | 0.294 | 0.859 | 0.230 |
sofa | SOF | 0.101 | 0.623 | 0.459 | 0.991 | 0.305 |
streetlight | STR | 0.852 | 0.500 | 0.064 | 0.002 | 0.010 |
Average | | 0.178 | 0.604 | 0.375 | 1.103 | 0.251 |
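The MAE, CC, and NSS entries above can be computed as follows; this is a minimal sketch assuming grayscale saliency maps and binary ground-truth masks as NumPy arrays (AUC-Borji and S-Measure require more machinery and are omitted here).

```python
import numpy as np

def mae(saliency, gt):
    """Mean absolute error between a saliency map and a binary
    ground-truth mask; lower is better."""
    return np.mean(np.abs(saliency - gt))

def cc(saliency, fixation_map):
    """Linear correlation coefficient between two maps; assumes
    neither map is constant. Higher is better."""
    s = (saliency - saliency.mean()) / saliency.std()
    f = (fixation_map - fixation_map.mean()) / fixation_map.std()
    return np.mean(s * f)

def nss(saliency, fixations):
    """Normalized scanpath saliency: mean of the z-scored saliency
    values at ground-truth (fixated) locations. Higher is better."""
    s = (saliency - saliency.mean()) / saliency.std()
    return s[fixations > 0].mean()
```

A perfect prediction gives MAE 0 and CC 1, while NSS grows with how far the predicted saliency at true-object pixels sits above the map's mean.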
MAE comparison with state-of-the-art methods (lower is better):

Methods | HIG | OFF | PED | PET | CAN | OVE | BLI | SKA | SID | TRA | SOF | STR | Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ITTI | 0.200 | 0.252 | 0.237 | 0.208 | 0.229 | 0.191 | 0.135 | 0.211 | 0.425 | 0.233 | 0.210 | 0.681 | 0.268 |
SUN | 0.244 | 0.219 | 0.207 | 0.316 | 0.349 | 0.194 | 0.146 | 0.274 | 0.288 | 0.323 | 0.251 | 0.783 | 0.300 |
SSR | 0.245 | 0.265 | 0.249 | 0.309 | 0.110 | 0.251 | 0.354 | 0.383 | 0.360 | 0.122 | 0.310 | 0.777 | 0.311 |
GBVS | 0.213 | 0.238 | 0.206 | 0.169 | 0.242 | 0.252 | 0.130 | 0.250 | 0.417 | 0.237 | 0.211 | 0.677 | 0.270 |
FES | 0.104 | 0.093 | 0.068 | 0.086 | 0.051 | 0.097 | 0.037 | 0.078 | 0.271 | 0.058 | 0.105 | 0.848 | 0.158 |
QSS | 0.222 | 0.139 | 0.134 | 0.199 | 0.259 | 0.224 | 0.100 | 0.170 | 0.342 | 0.169 | 0.187 | 0.784 | 0.244 |
HDCT | 0.112 | 0.100 | 0.079 | 0.113 | 0.044 | 0.110 | 0.052 | 0.118 | 0.240 | 0.062 | 0.169 | 0.781 | 0.165 |
PCA | 0.241 | 0.194 | 0.223 | 0.116 | 0.351 | 0.183 | 0.085 | 0.062 | 0.400 | 0.167 | 0.217 | 0.773 | 0.251 |
RSS | 0.086 | 0.087 | 0.032 | 0.035 | 0.025 | 0.048 | 0.008 | 0.036 | 0.255 | 0.058 | 0.082 | 0.872 | 0.135 |
CVS | 0.111 | 0.126 | 0.079 | 0.134 | 0.046 | 0.135 | 0.058 | 0.082 | 0.263 | 0.074 | 0.151 | 0.750 | 0.167 |
RWRS | 0.184 | 0.253 | 0.116 | 0.143 | 0.064 | 0.152 | 0.152 | 0.077 | 0.232 | 0.071 | 0.143 | 0.731 | 0.193 |
Proposed | 0.071 | 0.069 | 0.130 | 0.051 | 0.194 | 0.116 | 0.017 | 0.136 | 0.345 | 0.054 | 0.101 | 0.852 | 0.178 |
AUC-Borji comparison with state-of-the-art methods (higher is better):

Methods | HIG | OFF | PED | PET | CAN | OVE | BLI | SKA | SID | TRA | SOF | STR | Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ITTI | 0.687 | 0.634 | 0.613 | 0.767 | 0.510 | 0.437 | 0.439 | 0.449 | 0.471 | 0.510 | 0.712 | 0.499 | 0.561 |
SUN | 0.629 | 0.621 | 0.562 | 0.677 | 0.467 | 0.547 | 0.423 | 0.442 | 0.491 | 0.499 | 0.685 | 0.500 | 0.545 |
SSR | 0.748 | 0.695 | 0.595 | 0.754 | 0.530 | 0.595 | 0.473 | 0.483 | 0.485 | 0.546 | 0.762 | 0.487 | 0.596 |
GBVS | 0.626 | 0.667 | 0.590 | 0.768 | 0.490 | 0.407 | 0.398 | 0.434 | 0.477 | 0.481 | 0.676 | 0.500 | 0.543 |
FES | 0.567 | 0.676 | 0.493 | 0.745 | 0.472 | 0.488 | 0.262 | 0.422 | 0.491 | 0.499 | 0.645 | 0.503 | 0.522 |
QSS | 0.729 | 0.670 | 0.649 | 0.818 | 0.527 | 0.543 | 0.470 | 0.480 | 0.488 | 0.530 | 0.743 | 0.485 | 0.594 |
HDCT | 0.692 | 0.718 | 0.594 | 0.740 | 0.521 | 0.451 | 0.512 | 0.471 | 0.472 | 0.554 | 0.732 | 0.499 | 0.580 |
PCA | 0.685 | 0.723 | 0.581 | 0.709 | 0.514 | 0.449 | 0.406 | 0.467 | 0.471 | 0.548 | 0.775 | 0.488 | 0.568 |
RSS | 0.549 | 0.541 | 0.485 | 0.589 | 0.442 | 0.502 | 0.430 | 0.436 | 0.502 | 0.502 | 0.548 | 0.501 | 0.502 |
CVS | 0.737 | 0.634 | 0.604 | 0.743 | 0.513 | 0.573 | 0.496 | 0.457 | 0.472 | 0.523 | 0.693 | 0.508 | 0.579 |
RWRS | 0.769 | 0.700 | 0.644 | 0.758 | 0.526 | 0.630 | 0.630 | 0.488 | 0.474 | 0.568 | 0.756 | 0.509 | 0.621 |
Proposed | 0.801 | 0.719 | 0.659 | 0.842 | 0.522 | 0.514 | 0.526 | 0.481 | 0.483 | 0.581 | 0.623 | 0.500 | 0.604 |
S-Measure comparison with state-of-the-art methods (higher is better):

Methods | HIG | OFF | PED | PET | CAN | OVE | BLI | SKA | SID | TRA | SOF | STR | Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ITTI | 0.452 | 0.400 | 0.501 | 0.401 | 0.382 | 0.353 | 0.453 | 0.378 | 0.252 | 0.371 | 0.473 | 0.241 | 0.388 |
SUN | 0.401 | 0.413 | 0.444 | 0.402 | 0.405 | 0.357 | 0.427 | 0.394 | 0.138 | 0.390 | 0.443 | 0.144 | 0.363 |
SSR | 0.447 | 0.513 | 0.449 | 0.401 | 0.351 | 0.375 | 0.298 | 0.436 | 0.224 | 0.303 | 0.443 | 0.132 | 0.364 |
GBVS | 0.431 | 0.454 | 0.494 | 0.413 | 0.380 | 0.357 | 0.452 | 0.390 | 0.247 | 0.373 | 0.454 | 0.242 | 0.391 |
FES | 0.377 | 0.499 | 0.455 | 0.416 | 0.256 | 0.323 | 0.477 | 0.305 | 0.079 | 0.279 | 0.464 | 0.078 | 0.334 |
QSS | 0.434 | 0.464 | 0.480 | 0.419 | 0.427 | 0.367 | 0.483 | 0.364 | 0.208 | 0.304 | 0.481 | 0.130 | 0.380 |
HDCT | 0.422 | 0.501 | 0.465 | 0.360 | 0.244 | 0.291 | 0.299 | 0.347 | 0.053 | 0.276 | 0.448 | 0.133 | 0.320 |
PCA | 0.439 | 0.528 | 0.460 | 0.401 | 0.466 | 0.294 | 0.497 | 0.286 | 0.227 | 0.363 | 0.442 | 0.150 | 0.379 |
RSS | 0.341 | 0.341 | 0.442 | 0.380 | 0.217 | 0.307 | 0.307 | 0.274 | 0.078 | 0.258 | 0.380 | 0.052 | 0.281 |
CVS | 0.501 | 0.442 | 0.475 | 0.342 | 0.237 | 0.322 | 0.290 | 0.257 | 0.046 | 0.247 | 0.463 | 0.161 | 0.315 |
RWRS | 0.468 | 0.455 | 0.522 | 0.363 | 0.225 | 0.317 | 0.317 | 0.270 | 0.068 | 0.261 | 0.463 | 0.171 | 0.325 |
Proposed | 0.499 | 0.464 | 0.658 | 0.479 | 0.390 | 0.357 | 0.344 | 0.344 | 0.149 | 0.294 | 0.459 | 0.064 | 0.375 |
NSS comparison with state-of-the-art methods (higher is better):

Methods | HIG | OFF | PED | PET | CAN | OVE | BLI | SKA | SID | TRA | SOF | STR | Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ITTI | 0.902 | 0.527 | 1.283 | 1.352 | 0.323 | 0.149 | 1.398 | 0.154 | 0.106 | 0.199 | 0.994 | 0.007 | 0.616 |
SUN | 0.483 | 0.466 | 0.414 | 0.606 | 0.099 | 0.096 | 1.317 | 0.097 | 0.002 | 0.129 | 0.769 | 0.007 | 0.374 |
SSR | 1.035 | 0.985 | 0.636 | 0.988 | 0.678 | 0.329 | 0.982 | 0.282 | 0.019 | 0.303 | 1.065 | 0.042 | 0.612 |
GBVS | 0.641 | 0.770 | 1.207 | 1.718 | 0.245 | 0.256 | 1.196 | 0.105 | 0.086 | 0.373 | 0.854 | 0.001 | 0.621 |
FES | 0.533 | 1.286 | 0.576 | 1.844 | 0.194 | 0.049 | 0.082 | 0.056 | 0.051 | 0.302 | 0.786 | 0.016 | 0.481 |
QSS | 1.030 | 0.916 | 1.239 | 1.721 | 0.629 | 0.107 | 1.856 | 0.350 | 0.007 | 0.265 | 1.212 | 0.045 | 0.781 |
HDCT | 0.985 | 1.361 | 1.158 | 1.281 | 0.468 | 0.163 | 0.730 | 0.291 | 0.095 | 0.364 | 0.975 | 0.004 | 0.656 |
PCA | 0.722 | 1.206 | 0.871 | 1.178 | 0.431 | 0.121 | 1.513 | 0.258 | 0.096 | 0.373 | 1.190 | 0.041 | 0.667 |
RSS | 0.639 | 0.409 | 0.826 | 0.641 | 0.102 | 0.054 | 0.634 | 0.219 | 0.011 | 0.406 | 0.190 | 0.016 | 0.346 |
CVS | 1.324 | 0.962 | 1.164 | 1.285 | 0.342 | 0.206 | 0.633 | 0.167 | 0.121 | 0.242 | 1.025 | 0.028 | 0.625 |
RWRS | 1.448 | 0.852 | 1.725 | 1.142 | 0.475 | 0.480 | 0.480 | 0.383 | 0.084 | 0.507 | 1.196 | 0.032 | 0.734 |
Proposed | 2.158 | 1.298 | 2.245 | 3.310 | 0.505 | 0.196 | 1.094 | 0.511 | 0.071 | 0.859 | 0.991 | 0.002 | 1.103 |
CC comparison with state-of-the-art methods (higher is better):

Methods | HIG | OFF | PED | PET | CAN | OVE | BLI | SKA | SID | TRA | SOF | STR | Avg. |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ITTI | 0.306 | 0.169 | 0.256 | 0.193 | 0.104 | 0.013 | 0.218 | 0.057 | 0.170 | 0.094 | 0.264 | 0.031 | 0.156 |
SUN | 0.142 | 0.151 | 0.084 | 0.081 | 0.033 | 0.024 | 0.192 | 0.033 | 0.003 | 0.057 | 0.204 | 0.030 | 0.086 |
SSR | 0.299 | 0.325 | 0.121 | 0.143 | 0.166 | 0.092 | 0.163 | 0.090 | 0.031 | 0.115 | 0.288 | 0.185 | 0.168 |
GBVS | 0.231 | 0.254 | 0.244 | 0.233 | 0.089 | 0.039 | 0.191 | 0.047 | 0.138 | 0.085 | 0.230 | 0.004 | 0.149 |
FES | 0.187 | 0.428 | 0.120 | 0.244 | 0.071 | 0.008 | 0.017 | 0.028 | 0.081 | 0.124 | 0.231 | 0.071 | 0.134 |
QSS | 0.285 | 0.304 | 0.236 | 0.250 | 0.145 | 0.043 | 0.274 | 0.101 | 0.011 | 0.061 | 0.324 | 0.196 | 0.186 |
HDCT | 0.339 | 0.449 | 0.229 | 0.167 | 0.138 | 0.021 | 0.120 | 0.110 | 0.151 | 0.140 | 0.279 | 0.018 | 0.180 |
PCA | 0.238 | 0.398 | 0.176 | 0.158 | 0.127 | 0.003 | 0.246 | 0.096 | 0.153 | 0.150 | 0.331 | 0.178 | 0.188 |
RSS | 0.180 | 0.133 | 0.155 | 0.094 | 0.028 | 0.013 | 0.096 | 0.060 | 0.018 | 0.103 | 0.048 | 0.069 | 0.083 |
CVS | 0.437 | 0.314 | 0.229 | 0.175 | 0.111 | 0.078 | 0.101 | 0.060 | 0.193 | 0.082 | 0.286 | 0.121 | 0.182 |
RWRS | 0.414 | 0.277 | 0.326 | 0.179 | 0.135 | 0.124 | 0.124 | 0.119 | 0.133 | 0.144 | 0.329 | 0.138 | 0.204 |
Proposed | 0.561 | 0.426 | 0.400 | 0.457 | 0.136 | 0.074 | 0.167 | 0.136 | 0.111 | 0.230 | 0.305 | 0.010 | 0.251 |
Runtime comparison at two frame resolutions (lower is better):

Frame Size | Ours | Itti | SUN | SSR | GBVS | FES | QSS | CVS | HDCT | PCA | RSS | RWRS |
---|---|---|---|---|---|---|---|---|---|---|---|---|
320 × 240 px | 0.043 | 0.175 | 1.498 | 0.680 | 0.377 | 0.051 | 0.029 | 5.045 | 3.454 | 2.014 | 0.130 | 10.813 |
720 × 480 px | 0.130 | 0.217 | 8.096 | 0.841 | 0.380 | 0.112 | 0.054 | 20.98 | 7.256 | 11.308 | 0.152 | 16.636 |
Qualitative comparison (figure): input frame, ground truth (GT), and saliency maps from ours, Itti, SUN, SSR, GBVS, FES, QSS, PCA, and HDCT on the HIG, OFF, PED, PET, CAN, OVE, BLI, SKA, SID, TRA, and SOF sequences. (Images not reproduced here.)
Qualitative comparison (figure): input frame, ground truth (GT), and saliency maps from ours, RSS, CVS, and RWRS on the same sequences. (Images not reproduced here.)
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Ngo, T.-T.; Nguyen, V.; Pham, X.-Q.; Hossain, M.-A.; Huh, E.-N. Motion Saliency Detection for Surveillance Systems Using Streaming Dynamic Mode Decomposition. Symmetry 2020, 12, 1397. https://doi.org/10.3390/sym12091397