Identifying Queenlessness in Honeybee Hives from Audio Signals Using Machine Learning
Abstract
1. Introduction
2. Related Previous Work
3. Datasets and Methods
3.1. Data
3.2. Methodologies
3.2.1. Features
(i) Mel Frequency Cepstral Coefficients
(ii) Spectrograms
3.2.2. Classifiers
(i) Multi-Layer Perceptron
(ii) Logistic Regression
(iii) Long Short-Term Memory (LSTM)
(iv) Convolutional Neural Networks
4. Results
4.1. Preliminary Analysis: Time Domain
4.2. Frequency Domain Analysis
4.3. Mel Frequency Domain
4.4. Evaluation Metrics
4.5. Machine Learning Experiments
4.5.1. MFCCs as Features
4.5.2. Spectrograms as Features
5. Discussion
6. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Neumann, P.; Blacquière, T. The Darwin cure for apiculture? Natural selection and managed honeybee health. Evol. Appl. 2016, 10, 226–230.
- The World Wide Fund for Nature. Available online: https://www.wwf.org.uk/sites/default/files/2019-05/EofE%20bee%20report%202019%20FINAL_17MAY2019.pdf (accessed on 12 June 2020).
- Sharma, V.P.; Kumar, N.R. Changes in honey bee behaviour and biology under the influence of cell phone radiations. Curr. Sci. 2010, 98, 1376–1378.
- Boys, R. Listen to the Bees. Available online: https://beedata.com.mirror.hiveeyes.org/data2/listen/listenbees.htm (accessed on 4 January 2023).
- Terenzi, A.; Cecchi, S.; Orcioni, S.; Piazza, F. Features Extraction Applied to the Analysis of the Sounds Emitted by Honeybees in a Beehive. In Proceedings of the 11th International Symposium on Image and Signal Processing and Analysis (ISPA), Dubrovnik, Croatia, 23–25 September 2019; pp. 3–8.
- Kirchner, W.H. Acoustical Communication in Honeybees. Apidologie 1993, 24, 297–307.
- Howard, D.; Duran, O.; Hunter, G. Signal Processing the Acoustics of Honeybees (Apis mellifera) to Identify the ‘Queenless’ State in Hives. In Proceedings of the Institute of Acoustics, Nottingham, UK, 13 May 2013.
- Seeley, T.D.; Tautz, J. Worker Piping in Honeybee Swarms and its Role in Preparing for Liftoff. J. Comp. Physiol. 2001, 187, 667–676.
- Ferrari, S.; Silva, M.; Guarino, M.; Berckmans, D. Monitoring of swarming sounds in beehives for early detection of the swarming period. Comput. Electron. Agric. 2008, 64, 72–77.
- Ruvinga, S.; Hunter, G.J.A.; Duran, O.; Nebel, J.C. Use of LSTM Networks to Identify “Queenlessness” in Honeybee Hives from Audio Signals. In Proceedings of the 17th International Conference on Intelligent Environments (IE2021), Dubai, United Arab Emirates, 21–24 June 2021.
- Scheiner, R.; Abramson, C.I. Standard methods for behavioral studies of Apis mellifera. J. Apic. Res. 2013, 52, 1–58.
- Shaw, J.; Nugent, P. Long-wave infrared imaging for non-invasive beehive population assessment. Opt. Express 2011, 19, 399.
- Murphy, F.E.; Magno, M.; O’Leary, L. Big brother for bees (3B)—Energy neutral platform for remote monitoring of beehive imagery and sound. In Proceedings of the 6th IEEE International Workshop on Advances in Sensors and Interfaces (IWASI), Gallipoli, Italy, 18–19 June 2015.
- Campbell, J.; Mummert, L.; Sukthankar, R. Video monitoring of honey bee colonies at the hive entrance. In Proceedings of the Visual Observation and Analysis of Animal and Insect Behavior, ICPR 2008, Tampa, FL, USA, 8–11 December 2008; pp. 1–4.
- Kachole, S.; Hunter, G.; Duran, O. A Computer Vision Approach to Monitoring the Activity and Well-Being of Honeybees. In Proceedings of the IE 2020: 16th International Conference on Intelligent Environments, Madrid, Spain, 20–23 July 2020.
- Crawford, E.; Leidenberger, S.; Norrström, N.; Niklasson, M. Using Video Footage for Observing Honeybee Behavior at Hive Entrances. Bee World 2022, 99, 139–142.
- Wenner, A.M. Sound Communication in Honeybees. Sci. Am. 1964, 210, 116–124.
- Eren, H.; Whiffler, L.; Manning, R. Electronic sensing and identification of queen bees in honeybee colonies. In Proceedings of the Instrumentation and Measurement Technology Conference, Ottawa, ON, Canada, 19–21 May 1997.
- Žgank, A. Acoustic Monitoring and Classification of Bee Swarm Activity using MFCC Feature Extraction and HMM Acoustic Modelling. In Proceedings of the ELEKTRO 2018, Mikulov, Czech Republic, 21–23 May 2018.
- Robles-Guerrero, A.; Saucedo-Anaya, T.; González-Ramírez, E.; De la Rosa-Vargas, J.I. Analysis of a multiclass classification problem by Lasso Logistic Regression and Singular Value Decomposition to identify sound patterns in queen-less bee colonies. Comput. Electron. Agric. 2019, 159, 69–74.
- Peng, R.; Ardekani, L.; Sharifzadeh, H. An Acoustic Signal Processing System for Identification of Queen-less Beehives. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Auckland, New Zealand, 7–10 December 2020. Available online: https://ieeexplore.ieee.org/document/9306388 (accessed on 14 January 2023).
- Hochreiter, S.; Schmidhuber, J. Long Short-term Memory. Neural Comput. 1997, 9, 1735–1780.
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90.
- Zheng, F.; Zhang, G.; Song, Z. Comparison of Different Implementations of MFCC. J. Comput. Sci. Technol. 2001, 16, 582–589.
- Davis, S.; Mermelstein, P. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. Acoust. Speech Signal Process. 1980, 28, 357–366.
- Ganchev, T.; Fakotakis, N.; Kokkinakis, G. Comparative evaluation of various MFCC implementations on the speaker verification task. In Proceedings of the 10th International Conference on Speech and Computer, Patras, Greece, 17–19 October 2005.
- Beritelli, F.; Grasso, R. A pattern recognition system for environmental sound classification based on MFCCs and neural networks. In Proceedings of the 2nd International Conference on Signal Processing and Communication Systems, Gold Coast, Australia, 15–17 December 2008.
- Kour, G.; Mehan, N. Music Genre Classification using MFCC, SVM and BPNN. Int. J. Comput. Appl. 2015, 112, 6.
- Deng, M.; Meng, T.; Cao, J.; Wang, S.; Zhang, J.; Fan, H. Heart sound classification based on improved MFCC features and convolutional recurrent neural networks. Neural Netw. 2020, 130, 22–32.
- Mohamed, A. Deep Neural Network Acoustic Models for ASR. Ph.D. Thesis, University of Toronto, Toronto, ON, Canada, 2014.
- Shimodaira, H.; Renals, S. Speech Signal Analysis. 2013. Available online: https://www.inf.ed.ac.uk/teaching/courses/asr/2012-13/asr02-signal-4up.pdf (accessed on 4 January 2023).
- Paliwal, K.; Lyons, J.; Wojcicki, K. Preference for 20–40 ms window duration in speech analysis. In Proceedings of the 4th International Conference on Signal Processing and Communication Systems, Gold Coast, Australia, 13–15 December 2010.
- Wyse, L. Audio spectrogram representations for processing with convolutional neural networks. arXiv 2017, arXiv:1706.09559.
- Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995.
- Carling, A. Introduction to Neural Networks; Sigma Press: Cheshire, UK, 1992.
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning. Available online: http://www.deeplearningbook.org (accessed on 14 January 2023).
- Gibaru, O. Neural Network. Available online: https://www.oliviergibaru.org/courses/ML_NeuralNetwork.html (accessed on 4 May 2022).
- Ng, A.; Katanforoosh, K.; Bensouda Mourri, Y. Sequence Models. Available online: https://www.coursera.org/learn/nlp-sequence-models (accessed on 12 January 2022).
- Olah, C. Understanding LSTM Networks. Available online: http://colah.github.io/posts/2015-08-Understanding-LSTMs/ (accessed on 18 January 2023).
- Sak, H.; Senior, A.; Beaufays, F. Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling. In Proceedings of the INTERSPEECH 2014 (15th Annual Conference of the International Speech Communication Association), Singapore, 14–18 September 2014; pp. 338–342.
- Ratan, P. What Is the Convolutional Neural Network Architecture? 2021. Available online: https://www.analyticsvidhya.com/blog/2020/10/what-is-the-convolutional-neural-network-architecture/ (accessed on 14 November 2022).
- Mathworks.com. MFCC. Available online: https://uk.mathworks.com/help/audio/ref/mfcc.html (accessed on 12 January 2022).
- Scikit-learn. LogisticRegression. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html (accessed on 15 January 2023).
- Scikit-learn. MLPClassifier. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html (accessed on 15 January 2022).
- Keras. LSTM layer. Available online: https://keras.io/api/layers/recurrent_layers/lstm/ (accessed on 13 January 2022).
- Kingma, D.P.; Ba, J.L. ADAM: A Method for Stochastic Optimization. arXiv 2015, arXiv:1412.6980.
- Hinton, G.; Srivastava, N.; Swersky, K. Neural Networks for Machine Learning—Lecture 6e—Rmsprop: Divide the Gradient by a Running Average of Its Recent Magnitude. 2012. Available online: http://www.cs.toronto.edu/~hinton/coursera/lecture6/lec6.pdf (accessed on 17 February 2023).
- Keras. Convolution Layers. Available online: https://keras.io/api/layers/convolution_layers/ (accessed on 18 January 2022).
- Open-Source Beehives Project. Available online: https://zenodo.org/communities/opensourcebeehives/?page=1&size=20 (accessed on 17 January 2023).
- Nolasco, I.; Benetos, E. To be or not to bee: Investigating machine learning approaches for beehive sound recognition. arXiv 2018, arXiv:1811.06016.
- Cecchi, S.; Terenzi, A.; Orcioni, S.; Riolo, P.; Ruschioni, S.; Isidoro, N. A preliminary study of sounds emitted by honeybees in a beehive. In Proceedings of the 144th Convention of the Audio Engineering Society, Paper 9981, Milan, Italy, 23–26 May 2018. Available online: http://www.aes.org/e-lib/browse.cfm?elib=19498 (accessed on 24 February 2023).
- Terenzi, A.; Cecchi, S.; Spinsante, S. On the Importance of the Sound Emitted by Honey Bee Hives. Vet. Sci. 2020, 7, 168.
Queen status of each Apis mellifera sub-species hive by recording date (QP = queen present, QA = queen absent; (C) denotes a control hive):

| Sub-Species | 3 August 2012 | 4 August 2012 | 5 August 2012 | 6 August 2012 | 7 August 2012 | 8 August 2012 | 9 August 2012 |
|---|---|---|---|---|---|---|---|
| Ligustica | QP | QP/QA | QA | QA | QA | QA | QA |
| Ligustica (C) | QP | QP | QP | QP | QP | QP | QP |
| Carnica | QP | QP/QA | QA | QA | QA | QA | QA |
| Carnica (C) | QP | QP | QP | QP | QP | QP | QP |
Model validation accuracy:

| Dataset | Number of Features | Logistic Regression | MLP | LSTM |
|---|---|---|---|---|
| Dataset 1 | 14 | 0.8743 | 0.9008 | 0.9178 |
| Dataset 2 | 13 | 0.8554 | 0.8990 | 0.9038 |
| Dataset 3 | 13 | 0.8704 | 0.8979 | 0.9108 |
| Dataset 4 | 12 | 0.8141 | 0.8479 | 0.8744 |
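The comparison above pits logistic regression, an MLP, and an LSTM against the same 12–14 MFCC coefficients per frame. As a minimal sketch of the simplest of those baselines (not the authors' code: the synthetic 13-dimensional features, cluster separation, learning rate, and iteration count are all illustrative assumptions), a logistic-regression classifier trained by gradient descent on MFCC-like feature vectors looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-frame MFCC features: 400 frames of 13 coefficients,
# drawn from two overlapping Gaussian clusters. Labels: 1 = queenless, 0 = queenright.
N, D = 400, 13
X = np.vstack([rng.normal(-0.5, 1.0, (N // 2, D)),   # "queenright" frames
               rng.normal(+0.5, 1.0, (N // 2, D))])  # "queenless" frames
y = np.concatenate([np.zeros(N // 2), np.ones(N // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the logistic (cross-entropy) loss.
w, b, lr = np.zeros(D), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)          # predicted probability of "queenless"
    w -= lr * (X.T @ (p - y)) / N   # gradient w.r.t. weights
    b -= lr * np.mean(p - y)        # gradient w.r.t. bias

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.3f}")
```

Swapping the hand-rolled loop for scikit-learn's `LogisticRegression` or `MLPClassifier` (both cited in the references) changes only the fitting call; the MFCC feature matrix is the same in either case.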
Model | Precision | Recall | F1-Score | Training Accuracy | Validation Accuracy | Test Accuracy |
---|---|---|---|---|---|---|
LSTM | 0.92 | 0.92 | 0.92 | 0.9180 | 0.9178 | 0.9181 |
CNN | 0.9931 | 0.9931 | 0.9931 | 0.9912 | 0.9931 | 0.9900 |
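For the CNN results above, the input is a spectrogram rather than a per-frame coefficient vector. A minimal NumPy sketch of computing one via a short-time Fourier transform follows; the 8 kHz sample rate, 256-sample (32 ms) frames, 50% overlap, and 1 kHz test tone are illustrative choices here, not the paper's settings:

```python
import numpy as np

fs = 8000                                 # sample rate (Hz)
t = np.arange(fs) / fs                    # 1 second of audio
signal = np.sin(2 * np.pi * 1000 * t)     # 1 kHz sine as a stand-in for hive audio

frame_len, hop = 256, 128                 # 32 ms Hann-windowed frames, 50% overlap
window = np.hanning(frame_len)
n_frames = 1 + (len(signal) - frame_len) // hop

# Slice the signal into overlapping windowed frames, then FFT each frame.
frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                   for i in range(n_frames)])
spectrogram = np.abs(np.fft.rfft(frames, axis=1)).T   # shape: (freq bins, time frames)

# Sanity check: the dominant bin should sit at the tone's frequency.
peak_bin = spectrogram.mean(axis=1).argmax()
peak_hz = peak_bin * fs / frame_len
print(spectrogram.shape, f"peak near {peak_hz:.0f} Hz")
```

A stack of such magnitude arrays, one per audio clip, is the kind of two-dimensional "image" input a Keras convolutional layer accepts, and the 32 ms frame sits inside the 20–40 ms window range that Paliwal et al. (cited above) recommend for speech-style analysis.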
| Statistic | LSTM Model, Arnia Data | LSTM Model, Surrey, UK Data | CNN Model, Arnia Data | CNN Model, Surrey, UK Data |
|---|---|---|---|---|
| Mean | 0.9179 | 0.9080 | 0.9465 | 0.9861 |
| Standard deviation | 0.001 | 0.0034 | 0.0500 | 0.0116 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ruvinga, S.; Hunter, G.; Duran, O.; Nebel, J.-C. Identifying Queenlessness in Honeybee Hives from Audio Signals Using Machine Learning. Electronics 2023, 12, 1627. https://doi.org/10.3390/electronics12071627