Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images
Simple Summary
Abstract
1. Introduction
- A method named Domain-Aware Neural Architecture Search (DANAS) was developed to exploit the domain knowledge of camera trap images. The proposed method searches for networks directly on camera trap images, thereby avoiding negative effects, such as domain shift, incurred by the benchmark datasets used in conventional search methods.
- Aspect ratios of camera trap images are maintained during the search. As part of the domain knowledge, changes in aspect ratio may not be handled automatically by neural networks. Therefore, the changes are eliminated manually by first finding the most frequent aspect ratio and then padding images whose aspect ratios differ from it.
- A loss function was derived to guide DANAS toward lightweight networks suitable for edge devices. A theoretical analysis of the proposed loss function was conducted, and the analysis revealed the value of the hyperparameter in the loss function that maximizes its guiding effect on the search.
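The aspect-ratio normalization in the second point above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names (`most_frequent_ratio`, `padded_size`) and the exact ceiling arithmetic are our assumptions; only the overall rule (find the most frequent aspect ratio, then pad deviating images up to it) comes from the summary.

```python
# Sketch of the aspect-ratio normalization step (assumed details:
# the padding arithmetic below is ours, not from the paper).
from collections import Counter
from fractions import Fraction

def most_frequent_ratio(sizes):
    """Return the most common width:height ratio among (w, h) pairs."""
    return Counter(Fraction(w, h) for w, h in sizes).most_common(1)[0][0]

def padded_size(w, h, ratio):
    """Smallest size >= (w, h) that matches `ratio`, padding only (no crop)."""
    r = Fraction(w, h)
    if r == ratio:
        return w, h
    if r > ratio:  # image too wide for the target ratio: pad the height
        return w, -(-w * ratio.denominator // ratio.numerator)   # ceil division
    return -(-h * ratio.numerator // ratio.denominator), h       # pad the width

sizes = [(2048, 1536), (2048, 1536), (1920, 1080)]
target = most_frequent_ratio(sizes)        # 4:3 dominates in this toy sample
print(padded_size(1920, 1080, target))     # (1920, 1440): a 16:9 frame padded to 4:3
```

Padding (rather than stretching) preserves object proportions, which is the point of treating aspect ratio as domain knowledge; the padded borders can then be filled with a constant color before resizing for the network.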
2. Materials and Methods
2.1. Datasets
2.2. Method
3. Results
3.1. Search and Test on NACTI-a
3.2. Search and Test on MCTI
3.3. Tests on Jetson TX2
3.4. Comparisons between DANAS and Other Search Methods
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A
References
- Zhu, C.; Thomas, H.; Li, G. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops, Venice, Italy, 22–29 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2860–2864. [Google Scholar]
- Schneider, S.; Taylor, G.W.; Kremer, S. Deep learning object detection methods for ecological camera trap data. In Proceedings of the 15th Conference on Computer and Robot Vision, Toronto, ON, Canada, 9–11 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 321–328. [Google Scholar]
- Castelblanco, L.P.; Narváez, C.I.; Pulido, A.D. Methodology for mammal classification in camera trap images. In Proceedings of the 9th International Conference on Machine Vision, Nice, France, 24–27 February 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1–7. [Google Scholar]
- Randler, C.; Katzmaier, T.; Kalb, J.; Kalb, N.; Gottschalk, T.K. Baiting/Luring improves detection probability and species identification-a case study of mustelids with camera traps. Animals 2020, 10, 2178. [Google Scholar] [CrossRef] [PubMed]
- Moore, H.A.; Champney, J.L.; Dunlop, J.A.; Valentine, L.E.; Nimmo, D.G. Spot on: Using camera traps to individually monitor one of the world’s largest lizards. Wildl. Res. 2020, 47, 326–337. [Google Scholar] [CrossRef]
- Tabak, M.A.; Norouzzadeh, M.S.; Wolfson, D.W.; Newton, E.J.; Boughton, R.K.; Ivan, J.S.; Odell, E.A.; Newkirk, E.S.; Conrey, R.Y.; Stenglein, J.; et al. Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2. Ecol. Evol. 2020, 10, 10374–10383. [Google Scholar] [CrossRef] [PubMed]
- Yousif, M.; Yuan, J.; Kays, R.; He, Z. Animal scanner: Software for classifying humans, animals, and empty frames in camera trap images. Ecol. Evol. 2019, 9, 1578–1589. [Google Scholar] [CrossRef]
- Janzen, M.; Visser, K.; Visscher, D.; MacLeod, I.; Vujnovic, D.; Vujnovic, K. Semi-automated camera trap image processing for the detection of ungulate fence crossing events. Environ. Monit. Assess. 2017, 189, 1–13. [Google Scholar] [CrossRef]
- Nguyen, H.; Maclagan, S.J.; Nguyen, T.D.; Nguyen, T.; Flemons, P.; Andrews, K.; Ritchie, E.G.; Phung, D. Animal recognition and identification with deep convolutional neural networks for automated wildlife monitoring. In Proceedings of the 2017 IEEE International Conference on Data Science and Advanced Analytics, Tokyo, Japan, 19–21 October 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 40–49. [Google Scholar]
- Norouzzadeh, M.S.; Nguyen, A.; Kosmala, M.; Swanson, A.; Palmer, M.S.; Packer, C.; Clune, J. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning. Proc. Natl. Acad. Sci. USA 2018, 115, 5716–5725. [Google Scholar] [CrossRef] [Green Version]
- Miao, Z.; Gaynor, K.M.; Wang, J.; Liu, Z.; Muellerklein, O.; Norouzzadeh, M.S.; McInturff, A.; Bowie, R.C.K.; Nathan, R.; Yu, S.X.; et al. Insights and approaches using deep learning to classify wildlife. Sci. Rep. 2019, 9, 1–9. [Google Scholar] [CrossRef]
- Villa, A.G.; Salazar, A.; Vargas, F. Towards automatic wild animal monitoring: Identification of animal species in camera-trap images using very deep convolutional neural networks. Ecol. Inform. 2017, 41, 24–32. [Google Scholar] [CrossRef] [Green Version]
- Tabak, M.A.; Norouzzadeh, M.S.; Wolfson, D.W.; Sweeney, S.J.; Vercauteren, K.C.; Snow, N.P.; Halseth, J.M.; di Salvo, P.A.; Lewis, J.S.; White, M.D.; et al. Machine learning to classify animal species in camera trap images: Applications in ecology. Methods Ecol. Evol. 2019, 10, 585–590. [Google Scholar] [CrossRef] [Green Version]
- Norouzzadeh, M.S.; Morris, D.; Beery, S.; Joshi, N.; Jojic, N.; Clune, J. A deep active learning system for species identification and counting in camera trap images. Methods Ecol. Evol. 2020, 12, 150–161. [Google Scholar] [CrossRef]
- Follmann, P.; Radig, B. Detecting animals in infrared images from camera-traps. Pattern Recognit. Image Anal. 2018, 28, 605–611. [Google Scholar] [CrossRef]
- Chen, J.; Ran, X. Deep learning with edge computing: A review. Proc. IEEE 2019, 107, 1655–1674. [Google Scholar] [CrossRef]
- Elias, A.R.; Golubovic, N.; Krintz, C. Where’s the bear? Automating wildlife image processing using IoT and edge cloud systems. In Proceedings of the 2017 IEEE/ACM Second International Conference on Internet-of-Things Design and Implementation, Pittsburgh, PA, USA, 18–21 April 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 247–258. [Google Scholar]
- Zualkernan, I.A.; Dhou, S.; Judas, J.; Sajun, A.R.; Gomez, B.R.; Hussain, L.A.; Sakhnini, D. Towards an IoT-based deep learning architecture for camera trap image classification. In Proceedings of the 2020 IEEE Global Conference on Artificial Intelligence and Internet of Things, Dubai, United Arab Emirates, 12–16 December 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 111–116. [Google Scholar]
- Wei, W.; Luo, G.; Ran, J.; Li, J. Zilong: A tool to identify empty images in camera-trap data. Ecol. Inform. 2020, 55, 101021. [Google Scholar] [CrossRef]
- Whytock, R.C.; Świeżewski, J.; Zwerts, J.A.; Bara-Słupski, T.; Pambo, A.F.K.; Rogala, M.; Bahaa-el-din, L.; Boekee, K.; Brittain, S.; Cardoso, A.W.; et al. Robust ecological analysis of camera trap data labelled by a machine learning model. Methods Ecol. Evol. 2021, 12, 1080–1092. [Google Scholar] [CrossRef]
- Tekeli, U.; Bastanlar, Y. Elimination of useless images from raw camera-trap data. Turk. J. Electr. Eng. Comput. Sci. 2019, 27, 2395–2411. [Google Scholar] [CrossRef]
- Xing, Y.; Seferoglu, H. Predictive edge computing with hard deadlines. In Proceedings of the 24th IEEE International Symposium on Local and Metropolitan Area Networks, Washington, DC, USA, 25–27 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 13–18. [Google Scholar]
- Ulker, B.; Stuijk, H.; Corporaal, M.; Wijnhoven, R. Reviewing inference performance of state-of-the-art deep learning frameworks. In Proceedings of the 23rd International Workshop on Software and Compilers for Embedded Systems, St. Goar, Germany, 25–26 May 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 48–53. [Google Scholar]
- Li, Y.H.; Liu, J.; Wang, L.L. Lightweight network research based on deep learning: A review. In Proceedings of the 37th Chinese Control Conference, Wuhan, China, 25–27 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 9021–9026. [Google Scholar]
- Zhou, Y.; Chen, S.; Wang, Y.; Huan, W. Review of research on lightweight convolutional neural networks. In Proceedings of the 5th IEEE Information Technology and Mechatronics Engineering Conference, Chongqing, China, 12–14 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1713–1720. [Google Scholar]
- Zhong, Z. Deep Neural Network Architecture: From Artificial Design to Automatic Learning. PhD Thesis, University of Chinese Academy of Sciences, Beijing, China, 2019. [Google Scholar]
- Jaafra, Y.; Laurent, J.L.; Deruyver, A.; Naceur, M.S. Reinforcement learning for neural architecture search: A review. Image Vis. Comput. 2019, 89, 57–66. [Google Scholar] [CrossRef]
- Shang, Y. The Electrical Engineering Handbook, 1st ed.; Academic Press: Burlington, MA, USA, 2005; pp. 367–377. [Google Scholar]
- Ren, P.; Xiao, Y.; Chang, X.; Huang, P.; Li, Z.; Chen, X.; Wang, X. A comprehensive survey of neural architecture search: Challenges and solutions. ACM Comput. Surv. 2021, 54, 76. [Google Scholar] [CrossRef]
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Available online: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf (accessed on 5 January 2022).
- Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.F. ImageNet: A large-scale hierarchical image database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 248–255. [Google Scholar]
- Novotny, D.; Larlus, D.; Vedaldi, A. I have seen enough: Transferring parts across categories. In Proceedings of the British Machine Vision Conference, York, UK, 19–20 September 2016; p. 115. [Google Scholar]
- Zhang, Z.; He, Z.; Cao, G.; Cao, W. Animal detection from highly cluttered natural scenes using spatiotemporal object region proposals and patch verification. IEEE Trans. Multimed. 2016, 18, 2079–2092. [Google Scholar] [CrossRef]
- Wang, M.; Deng, W. Deep visual domain adaptation: A survey. Neurocomputing 2018, 312, 135–153. [Google Scholar] [CrossRef] [Green Version]
- Zoph, B.; Vasudevan, V.; Shlens, J.; Le, Q.V. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 8697–8710. [Google Scholar]
- Zoph, B.; Le, Q. Neural architecture search with reinforcement learning. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017; pp. 1–16. [Google Scholar]
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
- Pham, H.; Guan, M.; Zoph, B.; Dean, J. Efficient neural architecture search via parameter sharing. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 4095–4101. [Google Scholar]
- Dong, X.; Yang, Y. Nas-bench-201: Extending the scope of reproducible neural architecture search. In Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 26–30 April 2020; pp. 1–16. [Google Scholar]
- Lin, M.; Chen, Q.; Yan, S. Network in network. In Proceedings of the 2nd International Conference on Learning Representations, Banff, AB, Canada, 14–16 April 2014; pp. 1–10. [Google Scholar]
- Chollet, F. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1800–1807. [Google Scholar]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
- Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; Omnipress: Madison, WI, USA, 2010; pp. 807–814. [Google Scholar]
- Jia, L.; Tian, Y.; Zhang, J. Identifying Animals in Camera Trap Images via Neural Architecture Search. Comput. Intell. Neurosci. 2022, 2022, 1–15. [Google Scholar] [CrossRef]
- Williams, R.J. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 1992, 8, 229–256. [Google Scholar] [CrossRef] [Green Version]
- Yates, R.C. Curves and Their Properties, 1st ed.; National Council for Teachers of Mathematics: Reston, VA, USA, 1974; pp. 237–238. [Google Scholar]
- Reddi, S.J.; Kale, S.; Kumar, S. On the convergence of Adam and beyond. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, BC, Canada, 30 April–3 May 2018; pp. 1–23. [Google Scholar]
- Sandler, M.; Howard, A.G.; Zhu, M.; Zhmoginov, A.; Chen, L. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 4510–4520. [Google Scholar]
- Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
- Huang, G.; Liu, Z.; Maaten, L.V.D.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2261–2269. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar]
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 5987–5995. [Google Scholar]
- Zagoruyko, S.; Komodakis, N. Wide residual networks. In Proceedings of the British Machine Vision Conference, York, UK, 19–22 September 2016; BMVA Press: London, UK, 2016; p. 87. [Google Scholar]
- Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 24–26 April 2017; pp. 1–16. [Google Scholar]
- Sutskever, I.; Martens, J.; Dahl, G.E.; Hinton, G. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 1139–1147. [Google Scholar]
- Real, E.; Aggarwal, A.; Huang, Y.; Le, Q.V. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; pp. 4780–4789. [Google Scholar]
- Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305. [Google Scholar]
- Falkner, S.; Klein, A.; Hutter, F. BOHB: Robust and efficient hyperparameter optimization at scale. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 1–10. [Google Scholar]
Species in NACTI-a 1 | 2048 × 1536 (4:3) | 1920 × 1080 (16:9) | 2616 × 1472 (16:9) | Species in MCTI | 2048 × 1536 (4:3) | 1920 × 1080 (16:9) |
---|---|---|---|---|---|---|
Black bear 2 | 2420/534 | 10/1 | Agouti | 499/107 | 279/65 | |
Marten 2 | 72/16 | Bird | 584/120 | 70/20 | ||
Red squirrel 2 | 313/75 | Coiban Agouti | 1135/245 | 18/2 | ||
Jackrabbit 3 | 594/135 | 55/8 | Collared Peccary | 372/83 | 398/85 | |
Bobcat | 2040/453 | 15/2 | Opossum | 454/94 | 295/73 | |
California quail | 277/60 | European Hare | 578/122 | |||
Cougar | 2380/527 | Great Tinamou | 681/148 | 380/66 | ||
Coyote | 1416/322 | 55/14 | 6/1 | Mouflon | 1940/425 | |
Gray squirrel | 811/186 | 1/0 | Ocelot | 256/64 | 184/35 | |
Elk | 1754/393 | 8/1 | Paca | 772/162 | 200/62 | |
Gray fox | 1253/279 | 5/2 | Red Brocket Deer | 425/94 | 384/78 | |
Moose | 978/216 | Red Deer | 2321/509 | |||
Mule deer | 1761/397 | Red Fox | 410/91 | |||
Armadillo 4 | 521/113 | Red Squirrel | 343/78 | 182/36 | ||
Raccoon | 1126/250 | Roe Deer | 1038/233 | |||
Red deer | 1754/374 | Spiny Rat | 383/91 | 201/37 | ||
Red fox | 266/59 | White-nosed Coati | 883/192 | 179/41 | ||
Snowshoe hare | 1183/263 | White-tailed Deer | 1363/287 | 452/106 | ||
Striped skunk | 1080/243 | Wild Boar | 1538/345 | |||
Virginia opossum | 91/19 | Wood Mouse | 1105/245 | |||
Wild boar | 1548/340 | 4/2 | ||||
Wild turkey | 643/155 | 17/0 |
Species or Parameter Number | DANAS (Ours) | MobileNet-v2 [48] | EfficientNet [49] | DenseNet [50] | Resnet-18 [51] | ResNext [52] | Wide_ResNet [53] | Random Search |
---|---|---|---|---|---|---|---|---|
Para. num. | 1.36 | 2.25 | 4.04 | 6.98 | 11.19 | 23.02 | 66.88 | 0.52 |
Black bear 1 | 98.32 | 97.57 | 96.07 | 99.44 | 98.13 | 98.88 | 98.88 | 98.69 |
Marten 1 | 25.00 | 6.25 | 25.00 | 62.50 | 37.50 | 31.25 | 37.50 | 0.00 |
Red squirrel1 | 98.67 | 97.33 | 100 | 96.00 | 100 | 33.33 | 20.00 | 100 |
Jackrabbit 2 | 99.30 | 99.30 | 98.60 | 99.30 | 100 | 99.30 | 98.60 | 98.60 |
Bobcat | 97.58 | 96.92 | 96.26 | 97.36 | 96.26 | 96.48 | 95.60 | 97.14 |
Quail 3 | 96.67 | 95.00 | 98.33 | 96.67 | 100 | 90.00 | 83.33 | 96.67 |
Cougar | 98.10 | 96.20 | 96.20 | 99.05 | 98.48 | 95.26 | 95.83 | 97.53 |
Coyote | 95.55 | 94.07 | 95.85 | 92.88 | 93.77 | 81.90 | 78.34 | 93.18 |
Gray squirrel 4 | 100 | 97.31 | 97.85 | 96.24 | 98.92 | 93.01 | 97.85 | 96.77 |
Elk | 99.24 | 99.24 | 97.97 | 99.75 | 99.49 | 98.98 | 98.98 | 99.49 |
Gray fox | 99.64 | 97.15 | 96.09 | 98.22 | 97.86 | 97.51 | 97.15 | 98.58 |
Moose | 96.76 | 96.76 | 95.83 | 93.52 | 95.83 | 58.80 | 62.96 | 95.37 |
Mule deer | 98.49 | 97.48 | 96.98 | 98.49 | 98.24 | 94.71 | 94.96 | 98.49 |
Armadillo 5 | 100 | 98.23 | 100 | 100 | 97.35 | 97.35 | 100 | 100 |
Raccoon | 99.20 | 96.80 | 97.20 | 98.00 | 98.00 | 96.00 | 93.20 | 97.20 |
Red deer | 92.25 | 91.98 | 91.44 | 95.19 | 95.72 | 87.97 | 86.36 | 92.78 |
Red fox | 62.30 | 62.30 | 60.66 | 75.41 | 62.30 | 40.98 | 36.07 | 47.54 |
Hare 6 | 99.62 | 98.86 | 97.34 | 96.96 | 98.48 | 98.48 | 97.34 | 98.10 |
Skunk 7 | 99.18 | 99.18 | 98.77 | 99.59 | 100 | 98.77 | 98.35 | 99.59 |
Opossum 8 | 94.74 | 89.47 | 89.47 | 94.74 | 100 | 94.74 | 94.74 | 94.74 |
Wild boar | 95.32 | 96.78 | 95.61 | 96.49 | 97.08 | 86.84 | 87.13 | 94.44 |
Wild turkey | 98.06 | 96.13 | 98.71 | 99.35 | 99.35 | 87.74 | 82.58 | 96.77 |
Average | 92.91 | 90.92 | 91.83 | 94.78 | 93.76 | 84.47 | 83.44 | 90.53 |
Species or Parameter Number | DANAS (Ours) | MobileNet-v2 [48] | EfficientNet [49] | DenseNet [50] | Resnet-18 [51] | ResNext [52] | Wide_ResNet [53] | Random Search |
---|---|---|---|---|---|---|---|---|
Para. num. | 1.43 | 2.25 | 4.04 | 6.98 | 11.19 | 23.02 | 66.88 | 0.70 |
Agouti | 91.86 | 91.86 | 94.19 | 91.86 | 93.60 | 93.02 | 86.05 | 83.72 |
Bird | 97.02 | 88.10 | 87.50 | 91.07 | 92.26 | 89.29 | 92.26 | 88.69 |
Agouti 1 | 97.77 | 86.16 | 86.61 | 92.41 | 87.05 | 90.18 | 91.96 | 92.86 |
Peccary 2 | 90.12 | 86.63 | 81.98 | 88.95 | 77.91 | 83.72 | 86.63 | 90.12 |
Opossum | 94.42 | 96.14 | 93.56 | 97.00 | 97.00 | 94.42 | 96.14 | 93.13 |
Hare 3 | 95.31 | 66.41 | 75.78 | 88.28 | 92.19 | 82.03 | 86.72 | 70.31 |
Tinamou 4 | 73.74 | 65.66 | 74.75 | 75.76 | 75.76 | 68.69 | 81.82 | 41.41 |
Mouflon | 93.86 | 88.60 | 81.58 | 94.74 | 89.47 | 89.47 | 88.60 | 76.32 |
Ocelot | 96.41 | 88.02 | 89.82 | 89.22 | 92.22 | 91.62 | 90.42 | 92.81 |
Paca | 90.71 | 89.29 | 90.71 | 92.14 | 90.00 | 91.43 | 92.14 | 78.57 |
Deer 5 | 99.07 | 96.26 | 96.26 | 98.60 | 98.60 | 97.20 | 96.73 | 94.39 |
Red Deer | 97.46 | 91.86 | 95.93 | 96.69 | 97.46 | 96.95 | 96.69 | 94.40 |
Red Fox | 99.76 | 99.29 | 99.06 | 99.76 | 100 | 62.59 | 61.18 | 97.88 |
Red Squirrel | 99.80 | 98.04 | 98.82 | 100 | 99.80 | 97.45 | 99.61 | 99.21 |
Roe Deer | 94.85 | 97.00 | 95.71 | 97.85 | 97.00 | 97.00 | 98.71 | 94.42 |
Spiny Rat | 98.26 | 96.81 | 97.39 | 98.84 | 98.55 | 95.36 | 96.23 | 95.07 |
Coati 6 | 79.12 | 71.43 | 81.32 | 72.53 | 75.82 | 80.22 | 79.12 | 68.13 |
Deer 7 | 96.72 | 89.34 | 88.52 | 91.80 | 90.16 | 92.62 | 87.70 | 92.62 |
Wild Boar | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 93.47 |
Mouse 8 | 100 | 100 | 99.19 | 97.57 | 100 | 97.57 | 98.38 | 99.19 |
Average | 94.31 | 89.34 | 90.43 | 92.75 | 92.24 | 89.54 | 90.35 | 86.84 |
Species in NACTI-a | DANAS | Species in MCTI | DANAS |
---|---|---|---|
American black bear | 97.94 | Agouti | 92.44 |
American marten | 25.00 | Bird | 97.62 |
American red squirrel | 98.67 | Coiban Agouti | 97.77 |
Black-tailed jackrabbit | 98.60 | Collared Peccary | 90.12 |
Bobcat | 97.36 | Opossum | 93.56 |
California quail | 96.67 | European Hare | 95.31 |
Cougar | 98.29 | Great Tinamou | 69.70 |
Coyote | 95.55 | Mouflon | 93.86 |
Eastern Gray squirrel | 100 | Ocelot | 94.01 |
Elk | 99.24 | Paca | 91.43 |
Gray fox | 99.29 | Red Brocket Deer | 99.07 |
Moose | 97.22 | Red Deer | 97.96 |
Mule deer | 97.98 | Red Fox | 99.76 |
Nine-banded armadillo | 100 | Red Squirrel | 100 |
Raccoon | 98.80 | Roe Deer | 94.85 |
Red deer | 91.44 | Spiny Rat | 98.26 |
Red fox | 63.93 | White-nosed Coati | 78.02 |
Snowshoe hare | 99.24 | White-tailed Deer | 96.72 |
Striped skunk | 99.59 | Wild Boar | 100 |
Virginia opossum | 94.74 | Wood Mouse | 100 |
Wild boar | 95.32 | Average | 94.02 |
Wild turkey | 98.06 | ||
Average | 92.86 |
Method | Search (Seconds) | CIFAR-10 Validation | CIFAR-10 Test | CIFAR-100 Validation | CIFAR-100 Test | ImageNet-16-120 Validation | ImageNet-16-120 Test |
---|---|---|---|---|---|---|---|
REA [56] | 0.03 | 91.56 ± 0.13 | 94.35 ± 0.18 | 73.15 ± 0.49 | 73.05 ± 0.56 | 46.08 ± 0.77 | 46.08 ± 0.78 |
RS [57] | 1.00 | 91.48 ± 0.12 | 94.08 ± 0.26 | 72.63 ± 1.09 | 72.44 ± 0.70 | 45.90 ± 0.58 | 45.64 ± 0.85 |
REINFORCE [45] | 1.00 | 91.70 ± 0.06 | 94.35 ± 0.19 | 73.52 ± 0.30 | 73.43 ± 0.52 | 46.49 ± 0.41 | 45.98 ± 0.72 |
BOHB [58] | 6.12 | 88.52 ± 1.39 | 91.77 ± 1.30 | 62.62 ± 9.73 | 62.74 ± 9.79 | 33.43 ± 9.18 | 33.22 ± 9.51 |
DANAS (ours) | 4.24 | 91.58 ± 0.17 | 94.28 ± 0.21 | 72.85 ± 0.64 | 72.71 ± 0.87 | 45.99 ± 0.56 | 45.75 ± 0.83 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Jia, L.; Tian, Y.; Zhang, J. Domain-Aware Neural Architecture Search for Classifying Animals in Camera Trap Images. Animals 2022, 12, 437. https://doi.org/10.3390/ani12040437