Parallelization of the Honeybee Search Algorithm for Object Tracking
Abstract
1. Introduction
2. Materials and Methods
2.1. The Honeybee Dance Language
Algorithm 1 HSA is divided into three phases: exploration, recruitment, and harvest.

```
Honeybee Search Algorithm()
1 Exploration()
2 Recruitment()
3 Harvest()
```
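The three-phase structure of Algorithm 1 can be sketched sequentially in Python. Function names, the rank-based recruitment weighting, and the local-search harvest are illustrative assumptions; the paper's implementation is a parallel GPU version.

```python
import random

def exploration(fitness, n_bees, bounds):
    """Scatter explorer bees uniformly at random and rank them by fitness."""
    bees = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
            for _ in range(n_bees)]
    return sorted(bees, key=fitness, reverse=True)

def recruitment(explorers, n_foragers):
    """Assign more foragers to better-ranked sites (waggle-dance analogue)."""
    weights = [len(explorers) - rank for rank in range(len(explorers))]
    total = sum(weights)
    return [max(1, round(n_foragers * w / total)) for w in weights]

def harvest(fitness, site, n_foragers, radius):
    """Refine one site with a local random search around the best point."""
    best = site
    for _ in range(n_foragers):
        cand = tuple(x + random.uniform(-radius, radius) for x in best)
        if fitness(cand) > fitness(best):
            best = cand
    return best

def honeybee_search(fitness, bounds, n_bees=8, n_foragers=32, radius=0.1):
    """Exploration -> recruitment -> harvest; returns the best solution found."""
    explorers = exploration(fitness, n_bees, bounds)
    counts = recruitment(explorers, n_foragers)
    refined = [harvest(fitness, s, c, radius)
               for s, c in zip(explorers, counts)]
    return max(refined, key=fitness)
```

In the tracking application, `fitness` would be the NCC score of the image patch at the candidate position.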
2.2. The Honeybee Search Algorithm
Algorithm 2 Pseudocode for the exploration phase of HSA

```
Exploration
1 Initialize
2 Evaluate
3 while stop condition = false
4     Generate
5     Evaluate
6     Sharing
7     Select best
```
Algorithm 3 Pseudocode for the harvest phase of HSA

```
Harvest
1 for each individual
2     Initialize
3     Evaluate
4     while stop condition = false
5         Generate
6         Evaluate
7         Sharing
8         Select best
```
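Algorithms 2 and 3 share the same generational skeleton: initialize and evaluate a population, then repeatedly generate offspring, evaluate them, and select the best survivors. A minimal sequential sketch follows; `evolve` is an illustrative name, and the fitness-sharing step (which penalizes crowded individuals to preserve diversity) is omitted for brevity.

```python
def evolve(fitness, init_population, generate, max_gens=2):
    """Generational loop shared by the exploration and harvest phases:
    initialize, evaluate, then repeatedly generate offspring, evaluate,
    and keep the best survivors (fitness sharing omitted)."""
    pop = init_population()
    scored = [(fitness(ind), ind) for ind in pop]
    for _ in range(max_gens):  # stop condition: fixed generation count
        offspring = generate([ind for _, ind in scored])
        scored += [(fitness(ind), ind) for ind in offspring]
        scored.sort(key=lambda t: t[0], reverse=True)  # select best
        scored = scored[:len(pop)]  # keep the population size constant
    return scored[0][1]
```

For example, with a deterministic `generate` that shifts each individual by one, the loop steadily climbs toward the optimum of `-|x - 3|`.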
2.2.1. Crossover
2.2.2. Mutation
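The crossover operator referenced here is the simulated binary crossover (SBX) of Deb and Beyer, and the distribution indices in the parameter table (2 for crossover, 25 for mutation) are consistent with SBX paired with polynomial mutation. A single-variable sketch under that assumption (not necessarily the paper's exact operators):

```python
import random

def sbx_crossover(p1, p2, eta=2.0):
    """Simulated binary crossover: the spread factor beta concentrates
    offspring near the parents as eta grows. Preserves the parent mean."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutation(p, lo, hi, eta=25.0):
    """Polynomial mutation: a bounded perturbation whose typical size
    shrinks as eta grows; the result is clipped to [lo, hi]."""
    u = random.random()
    if u < 0.5:
        delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
    return min(hi, max(lo, p + delta * (hi - lo)))
```

A useful sanity check is that SBX offspring always average to the parents' mean, regardless of the random draw.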
2.3. Normalized Cross-Correlation
2.4. NCC as a Fitness Function
2.5. Workflow of Parallel Experiments
2.5.1. Hardware
2.5.2. Random Number Generation
2.5.3. GPU Kernel Parameters
2.5.4. Honeybee Search Algorithm Parallelization
- generation of the initial random population for the exploration phase
- generation of populations using mutation, crossover and random exploration
- merging and sorting of populations
- selection of the best from populations
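The merge-and-select steps above can be written sequentially as a single sort-and-truncate; on the GPU this maps naturally onto parallel primitives such as merge sort (cited in the references). `merge_and_select` is a hypothetical helper name for illustration:

```python
def merge_and_select(parents, offspring, fitness, mu):
    """Merge parent and offspring populations, then keep the mu fittest.
    Sequential stand-in for the GPU merge/sort/selection kernels."""
    merged = parents + offspring
    merged.sort(key=fitness, reverse=True)  # GPU version: parallel merge sort
    return merged[:mu]
```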
2.6. Tests Procedure and Evaluation Criteria
2.6.1. ALOV Video Dataset
- Light: 33 videos that have sudden and intense changes in the main light source or how it illuminates the tracked object.
- Surface Cover: 15 videos where the tracked object changes its surface cover, but this cover adopts the form of the object it covers.
- Specularity: 18 videos with shiny objects that reflect light and produce specularities.
- Transparency: 20 videos where the tracked object is transparent and easily confused with the background.
- Shape: 24 videos in which the tracked object drastically changes its shape.
- Motion Smoothness: 22 videos that show an object that moves so slowly that no movement is detected whatsoever.
- Motion Coherence: 12 videos where the tracked object does not follow a predictable route of movement.
- Clutter: 15 videos with tracked objects that display similar patterns to the ones observed in the background.
- Confusion: 37 videos that show objects that are very similar to the object of interest, thus confusing the tracker.
- Low Contrast: 23 videos where the tracked object shows little contrast with the background or other objects.
- Occlusion: 34 videos where the object of interest is occluded by other objects or is not in the field of vision at a certain point.
- Moving Camera: 22 videos that are affected by the camera’s sudden movements.
- Zooming Camera: 29 videos where the zoom of the camera changes the displayed size of the object.
- Long Duration: 10 videos that have greater duration, between one and two minutes.
2.6.2. F-Score
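The F-score used in ALOV-style evaluation combines the per-frame precision and recall of the predicted bounding boxes; assuming the standard F1 (harmonic-mean) definition:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (standard F1 definition)."""
    if precision + recall == 0.0:
        return 0.0  # degenerate case: no correct detections at all
    return 2.0 * precision * recall / (precision + recall)
```

The harmonic mean rewards balance: a tracker with precision 1.0 but recall 0.0 still scores 0.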
2.6.3. Average Time per Frame
2.6.4. Score-Time Efficiency
3. Results
3.1. Polynomial Time Complexity
3.2. Sequential Exhaustive NCC vs. Parallel Exhaustive NCC
3.3. Reported NCC and Other Trackers versus This Work’s NCC
3.4. Adjusting the Honeybee Search Algorithm
3.5. GPU Exhaustive vs. GPU with Honeybee Search Algorithm
3.6. Tests with Gaussian Noise
3.7. Our Best Tracker Proposal versus Two Recent Trackers
4. Discussion
5. Conclusions
5.1. Future Work
- Perform tests with different datasets, since the ALOV dataset is mainly focused on accuracy. An interesting dataset could be the object tracking benchmark of Kristan et al. [64]. Also, state-of-the-art trackers and other well-known traditional trackers could be used for testing; for instance, Struck [45] and SiamMask [10] are currently considered among the most accurate and fastest trackers [3,64].
- Perform tests with different parallel computing or hardware acceleration technologies. The synchronization required by HSA could be provided by another architecture such as the FPGA [65], which would allow designing a parallel computing architecture with custom capabilities.
- Comparisons with other Swarm Intelligence heuristics. This work used HSA, which could be compared against other similar heuristics, such as Particle Swarm Optimization or Artificial Bee Colony.
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Roy, A.; Chattopadhyay, P.; Sural, S.; Mukherjee, J.; Rigoll, G. Modelling, synthesis and characterisation of occlusion in videos. IET Comput. Vis. 2015, 9, 821–830. [Google Scholar] [CrossRef]
- Tesfaye, Y.T.; Zemene, E.; Pelillo, M.; Prati, A. Multi-object tracking using dominant sets. IET Comput. Vis. 2016, 10, 289–298. [Google Scholar] [CrossRef]
- Smeulders, A.W.; Chu, D.M.; Cucchiara, R.; Calderara, S.; Dehghan, A.; Shah, M. Visual tracking: An experimental survey. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1442–1468. [Google Scholar]
- Wu, Y.; Lim, J.; Yang, M.H. Object tracking benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1834–1848. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Galoogahi, H.K.; Fagg, A.; Huang, C.; Ramanan, D.; Lucey, S. Need for speed: A benchmark for higher frame rate object tracking. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: Hoboken, NJ, USA, 2017; pp. 1134–1143. [Google Scholar]
- Olague, G.; Hernández, D.E.; Llamas, P.; Clemente, E.; Briseño, J.L. Brain programming as a new strategy to create visual routines for object tracking. Multimed. Tools Appl. 2019, 78, 5881–5918. [Google Scholar] [CrossRef]
- Olague, G.; Hernández, D.E.; Clemente, E.; Chan-Ley, M. Evolving Head Tracking Routines With Brain Programming. IEEE Access 2018, 6, 26254–26270. [Google Scholar] [CrossRef]
- Yazdi, M.; Bouwmans, T. New trends on moving object detection in video images captured by a moving camera: A survey. Comput. Sci. Rev. 2018, 28, 157–177. [Google Scholar] [CrossRef]
- Björkman, M.; Bergström, N.; Kragic, D. Detecting, segmenting and tracking unknown objects using multi-label MRF inference. Comput. Vis. Image Underst. 2014, 118, 111–127. [Google Scholar] [CrossRef] [Green Version]
- Wang, Q.; Zhang, L.; Bertinetto, L.; Hu, W.; Torr, P.H. Fast online object tracking and segmentation: A unifying approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1328–1338. [Google Scholar]
- Lu, S.; Yu, X.; Li, R.; Zong, Y.; Wan, M. Passive cavitation mapping using dual apodization with cross-correlation in ultrasound therapy monitoring. Ultrason. Sonochem. 2019, 54, 18–31. [Google Scholar] [CrossRef]
- Yang, X.; Deka, S.; Righetti, R. A hybrid CPU-GPGPU approach for real-time elastography. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2011, 58, 2631–2645. [Google Scholar] [CrossRef]
- Idzenga, T.; Gaburov, E.; Vermin, W.; Menssen, J.; De Korte, C.L. Fast 2-D ultrasound strain imaging: The benefits of using a GPU. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2014, 61, 207–213. [Google Scholar] [CrossRef] [PubMed]
- Olague, G.; Puente, C. The honeybee search algorithm for three-dimensional reconstruction. In Proceedings of the Workshop on Applications of Evolutionary Computation, Budapest, Hungary, 10–12 April 2006; Springer: Berlin/Heidelberg, Germany, 2006; pp. 427–437. [Google Scholar]
- Olague, G.; Puente, C. Parisian evolution with honeybees for three-dimensional reconstruction. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Seattle, WA, USA, 8–12 July 2006; ACM: New York, NY, USA, 2006; pp. 191–198. [Google Scholar] [CrossRef]
- Olague, G.; Puente, C. Honeybees as an intelligent based approach for 3D reconstruction. In Proceedings of the 18th International Conference on Pattern Recognition (ICPR’06), Hong Kong, China, 20–24 August 2006; IEEE: Hoboken, NJ, USA, 2006; Volume 1, pp. 1116–1119. [Google Scholar] [CrossRef] [Green Version]
- Olague, G. Evolutionary Computer Vision: The First Footprints; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
- Simonite, T. Virtual Bees Help Robots See in 3D. Available online: https://www.google.com.mx/amp/s/www.newscientist.com/article/dn10129-virtual-bees-help-robots-see-in-3d/amp/ (accessed on 20 December 2019).
- Tan, Y.; Ding, K. A survey on GPU-based implementation of swarm intelligence algorithms. IEEE Trans. Cybern. 2016, 46, 2028–2041. [Google Scholar] [CrossRef] [PubMed]
- Kalivarapu, V.; Winer, E. Implementation of digital pheromones in PSO accelerated by commodity graphics hardware. In Proceedings of the 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Victoria, BC, Canada, 10–12 September 2008; pp. 1–17. [Google Scholar]
- Hsieh, H.T.; Chu, C.H. Particle swarm optimisation (PSO)-based tool path planning for 5-axis flank milling accelerated by graphics processing unit (GPU). Int. J. Comput. Integr. Manuf. 2011, 24, 676–687. [Google Scholar] [CrossRef]
- Tsutsui, S.; Fujimoto, N. ACO with tabu search on a GPU for solving QAPs using move-cost adjusted thread assignment. In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, Dublin, Ireland, 12–16 July 2011; ACM: New York, NY, USA, 2011; pp. 1547–1554. [Google Scholar]
- Calazan, R.M.; Nedjah, N.; de Macedo Mourelle, L. Three alternatives for parallel GPU-based implementations of high performance particle swarm optimization. In Proceedings of the International Work-Conference on Artificial Neural Networks, Puerto de la Cruz, Tenerife, Spain, 12–14 June 2013; Springer: Berlin/Heidelberg, Germany, 2013; pp. 241–252. [Google Scholar]
- Souza, D.L.; Teixeira, O.N.; Monteiro, D.C.; de Oliveira, R.C.L. A new cooperative evolutionary multi-swarm optimizer algorithm based on CUDA parallel architecture applied to solve engineering optimization problems. In Proceedings of the 3rd International Workshop on Combinations of Intelligent Methods and Applications (CIMA 2012), Montpellier, France, 27–31 August 2012; p. 49. [Google Scholar]
- Rohilla, R.; Sikri, V.; Kapoor, R. Spider monkey optimisation assisted particle filter for robust object tracking. IET Comput. Vis. 2016, 11, 207–219. [Google Scholar] [CrossRef]
- Naiel, M.A.; Ahmad, M.O.; Swamy, M.; Lim, J.; Yang, M.H. Online multi-object tracking via robust collaborative model and sample selection. Comput. Vis. Image Underst. 2017, 154, 94–107. [Google Scholar] [CrossRef]
- Kang, K.; Bae, C.; Yeung, H.W.F.; Chung, Y.Y. A hybrid gravitational search algorithm with swarm intelligence and deep convolutional feature for object tracking optimization. Appl. Soft Comput. 2018, 66, 319–329. [Google Scholar] [CrossRef]
- Crist, E. Can an insect speak? The case of the honeybee dance language. Soc. Stud. Sci. 2004, 34, 7–43. [Google Scholar] [CrossRef] [Green Version]
- Deb, K.; Beyer, H.G. Self-adaptive genetic algorithms with simulated binary crossover. Evol. Comput. 2001, 9, 197–221. [Google Scholar] [CrossRef]
- Boumaza, A.M.; Louchet, J. Dynamic flies: Using real-time parisian evolution in robotics. In Proceedings of the Workshop on Applications of Evolutionary Computation, Como, Italy, 18–20 April 2001; Springer: Berlin/Heidelberg, Germany, 2001; pp. 288–297. [Google Scholar]
- Goldberg, D.E.; Richardson, J. Genetic algorithms with sharing for multimodal function optimization. In Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms; Lawrence Erlbaum: Hillsdale, NJ, USA, 1987; pp. 41–49. [Google Scholar]
- Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2001; Volume 16. [Google Scholar]
- Lewis, J.P. Fast Normalized Cross-Correlation; Vision Interface: Quebec, QC, Canada, 1995; Volume 10, pp. 120–123. [Google Scholar]
- Di Stefano, L.; Mattoccia, S.; Tombari, F. ZNCC-based template matching using bounded partial correlation. Pattern Recognit. Lett. 2005, 26, 2129–2134. [Google Scholar] [CrossRef]
- Bätz, M.; Richter, T.; Garbas, J.U.; Papst, A.; Seiler, J.; Kaup, A. High dynamic range video reconstruction from a stereo camera setup. Signal Process. Image Commun. 2014, 29, 191–202. [Google Scholar] [CrossRef]
- Mantor, M. AMD Radeon HD 7970 with graphics core next (GCN) architecture. In Proceedings of the Hot Chips 24 Symposium (HCS), Cupertino, CA, USA, 27–29 August 2012; pp. 1–35. [Google Scholar]
- Gaster, B.; Howes, L.; Kaeli, D.R.; Mistry, P.; Schaa, D. Heterogeneous Computing with OpenCL: Revised OpenCL 1; Newnes; Morgan Kaufmann: San Francisco, CA, USA, 2012. [Google Scholar]
- Khare, V.; Yao, X.; Deb, K. Performance scaling of multi-objective evolutionary algorithms. In Evolutionary Multi-Criterion Optimization; Springer: Berlin/Heidelberg, Germany, 2003; p. 72. [Google Scholar]
- Davidson, A.; Tarjan, D.; Garland, M.; Owens, J.D. Efficient parallel merge sort for fixed and variable length keys. In Proceedings of the Innovative Parallel Computing (InPar), San Jose, CA, USA, 13–14 May 2012; pp. 1–9. [Google Scholar]
- Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 362. [Google Scholar]
- Collett, D. Modelling Survival Data in Medical Research; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
- Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms; MIT Press: Boston, MA, USA, 2009. [Google Scholar]
- Puddu, L. ALOV300++ Dataset. Available online: http://alov300pp.joomlafree.it/ (accessed on 20 December 2019).
- Hare, S.; Golodetz, S.; Saffari, A.; Vineet, V.; Cheng, M.M.; Hicks, S.L.; Torr, P.H. Struck: Structured output tracking with kernels. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 2096–2109. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Nguyen, H.T.; Smeulders, A.W. Robust tracking using foreground-background texture discrimination. Int. J. Comput. Vis. 2006, 69, 277–293. [Google Scholar] [CrossRef]
- Kalal, Z.; Matas, J.; Mikolajczyk, K. P-N learning: Bootstrapping binary classifiers by structural constraints. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 49–56. [Google Scholar]
- Maggio, E.; Cavallaro, A. Video Tracking: Theory and Practice; John Wiley & Sons: Hoboken, NJ, USA, 2011. [Google Scholar]
- Baker, S.; Matthews, I. Lucas-Kanade 20 years on: A unifying framework. Int. J. Comput. Vis. 2004, 56, 221–255. [Google Scholar] [CrossRef]
- Nguyen, H.T.; Smeulders, A.W. Fast occluded object tracking by a robust appearance filter. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1099–1104. [Google Scholar] [CrossRef] [PubMed]
- Adam, A.; Rivlin, E.; Shimshoni, I. Robust fragments-based tracking using the integral histogram. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; Volume 1, pp. 798–805. [Google Scholar]
- Comaniciu, D.; Ramesh, V.; Meer, P. Real-time tracking of non-rigid objects using mean shift. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA, 15 June 2000; Volume 2, pp. 142–149. [Google Scholar]
- Oron, S.; Bar-Hillel, A.; Levi, D.; Avidan, S. Locally orderless tracking. Int. J. Comput. Vis. 2015, 111, 213–228. [Google Scholar] [CrossRef]
- Ross, D.A.; Lim, J.; Lin, R.S.; Yang, M.H. Incremental learning for robust visual tracking. Int. J. Comput. Vis. 2008, 77, 125–141. [Google Scholar] [CrossRef]
- Kwon, J.; Lee, K.M.; Park, F.C. Visual tracking via geometric particle filtering on the affine group with optimal importance functions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 991–998. [Google Scholar]
- Kwon, J.; Lee, K.M. Tracking of a non-rigid object via patch-based dynamic appearance modeling and adaptive basin hopping Monte Carlo sampling. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1208–1215. [Google Scholar]
- Čehovin, L.; Kristan, M.; Leonardis, A. An adaptive coupled-layer visual model for robust visual tracking. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1363–1370. [Google Scholar]
- Mei, X.; Ling, H. Robust visual tracking using ℓ1-minimization. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 1436–1443. [Google Scholar]
- Mei, X.; Ling, H.; Wu, Y.; Blasch, E.; Bai, L. Minimum Error Bounded Efficient L1 Tracker with Occlusion Detection; Technical report; Air Force Research Lab Wright-Patterson AFB OH: Wright-Patterson, OH, USA, 2011; Preprint. [Google Scholar]
- Godec, M.; Roth, P.M.; Bischof, H. Hough-based tracking of non-rigid objects. Comput. Vis. Image Underst. 2013, 117, 1245–1256. [Google Scholar] [CrossRef]
- Wang, S.; Lu, H.; Yang, F.; Yang, M.H. Superpixel tracking. In Proceedings of the 2011 IEEE International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 1323–1330. [Google Scholar]
- Babenko, B.; Yang, M.H.; Belongie, S. Visual tracking with online multiple instance learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 983–990. [Google Scholar]
- Nataraj, L.; Sarkar, A.; Manjunath, B.S. Adding gaussian noise to “denoise” JPEG for detecting image resizing. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 1493–1496. [Google Scholar]
- Kristan, M.; Matas, J.; Leonardis, A.; Felsberg, M.; Pflugfelder, R.; Kamarainen, J.K.; Cehovin Zajc, L.; Drbohlav, O.; Lukezic, A.; Berg, A.; et al. The seventh visual object tracking vot2019 challenge results. In Proceedings of the IEEE International Conference on Computer Vision Workshops, Seoul, Korea, 27–28 October 2019. [Google Scholar]
- Nou-Shene, T.; Pudi, V.; Sridharan, K.; Thomas, V.; Arthi, J. Very large-scale integration architecture for video stabilisation and implementation on a field programmable gate array-based autonomous vehicle. IET Comput. Vis. 2015, 9, 559–569. [Google Scholar] [CrossRef]
| Parameter Description | BEE | NO BEE |
|---|---|---|
| Size of all populations | 64 | – |
| Generations | 2 | – |
| Parameter for mutation | 25 | – |
| Parameter for crossover | 2 | – |
| Mutation sons, exploration | 60% | – |
| Crossover sons, exploration | 10% | – |
| Random sons, exploration | 30% | – |
| Mutation sons, harvest | 60% | – |
| Crossover sons, harvest | 30% | – |
| Random sons, harvest | 10% | – |
| Implementation | Abbreviation | Time Complexity |
|---|---|---|
| Sequential exhaustive NCC | CPU NO BEE | |
| Parallel exhaustive NCC | GPU NO BEE | |
| Sequential HSA NCC | CPU BEE | |
| Parallel HSA NCC | GPU BEE | |
| | Average Time per Frame [spf] | F-Score * Average [arb. unit] | F-Score * Deviation [arb. unit] |
|---|---|---|---|
| CPU NO BEE | 202.1399 | 0.5453 | 0.3056 |
| GPU NO BEE | 1.1823 | 0.5051 | 0.3327 |
| Both | 101.6611 | 0.5252 | 0.3193 |
Perez-Cham, O.E.; Puente, C.; Soubervielle-Montalvo, C.; Olague, G.; Aguirre-Salado, C.A.; Nuñez-Varela, A.S. Parallelization of the Honeybee Search Algorithm for Object Tracking. Appl. Sci. 2020, 10, 2122. https://doi.org/10.3390/app10062122