Deep-Learning-Assisted Multi-Dish Food Recognition Application for Dietary Intake Reporting
Abstract
1. Introduction
Goals
- A deep-learning-assisted multiple-dish food recognition model was developed that recognizes food items automatically.
- The proposed DL model works robustly in recognizing single dishes, mixed dishes, and multiple dishes of local Taiwanese cuisine, which can help in deciding on an appropriate healthy dietary intake.
- Performance was evaluated and compared against existing state-of-the-art recognition models.
2. Related Works
3. Methodology
3.1. Data Acquisition and Labeling
3.2. Deep-Learning-Based Food Recognition Model
3.3. Procedures of DL Model
4. Experimental Results
4.1. Evaluation Metrics
4.2. Results of Performance Metrics
4.3. Results of Mean Average Precision
4.4. Results on Model Speed
4.5. Results on Different Dataset
4.6. DL-Based Food Recognition
5. Discussion
5.1. Evaluation of the Model
5.2. Potential Implementation
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Hajat, C.; Stein, E. The global burden of multiple chronic conditions: A narrative review. Prev. Med. Rep. 2018, 12, 284–293.
- Afshin, A.; Sur, P.J.; Fay, K.A.; Cornaby, L.; Ferrara, G.; Salama, J.S.; Mullany, E.C.; Abate, K.H.; Abbafati, C.; Abebe, Z.; et al. Health effects of dietary risks in 195 countries, 1990–2017: A systematic analysis for the Global Burden of Disease Study 2017. Lancet 2019, 393, 1958–1972.
- Springmann, M.; Wiebe, K.; Mason-D’Croz, D.; Sulser, T.B.; Rayner, M.; Scarborough, P. Health and nutritional aspects of sustainable diet strategies and their association with environmental impacts: A global modelling analysis with country-level detail. Lancet Planet. Health 2018, 2, e451–e461.
- Neuhouser, M.L. The importance of healthy dietary patterns in chronic disease prevention. Nutr. Res. 2019, 70, 3–6.
- Debon, R.; Coleone, J.D.; Bellei, E.A.; De Marchi, A.C.B. Mobile health applications for chronic diseases: A systematic review of features for lifestyle improvement. Diabetes Metab. Syndr. Clin. Res. Rev. 2019, 13, 2507–2512.
- Yannakoulia, M.; Mamalaki, E.; Anastasiou, C.A.; Mourtzi, N.; Lambrinoudaki, I.; Scarmeas, N. Eating habits and behaviors of older people: Where are we now and where should we go? Maturitas 2018, 114, 14–21.
- Wang, Y.; Min, J.; Khuri, J.; Xue, H.; Xie, B.; Kaminsky, L.A.; Cheskin, L.J. Effectiveness of mobile health interventions on diabetes and obesity treatment and management: Systematic review of systematic reviews. JMIR mHealth uHealth 2020, 8, e15400.
- Vandelanotte, C.; Muller, A.M.; Short, C.E.; Hingle, M.; Nathan, N.; Williams, S.L.; Lopez, M.L.; Parekh, S.; Maher, C.A. Past, present, and future of eHealth and mHealth research to improve physical activity and dietary behaviors. J. Nutr. Educ. Behav. 2016, 48, 219–228.e1.
- Faiola, A.; Papautsky, E.L.; Isola, M. Empowering the aging with mobile health: A mHealth framework for supporting sustainable healthy lifestyle behavior. Curr. Probl. Cardiol. 2019, 44, 232–266.
- Lee, J.A.; Choi, M.; Lee, S.A.; Jiang, N. Effective behavioral intervention strategies using mobile health applications for chronic disease management: A systematic review. BMC Med. Inform. Decis. Mak. 2018, 18, 12.
- Lunde, P.; Nilsson, B.B.; Bergland, A.; Kværner, K.J.; Bye, A. The effectiveness of smartphone apps for lifestyle improvement in noncommunicable diseases: Systematic review and meta-analyses. J. Med. Internet Res. 2018, 20, e9751.
- Messner, E.M.; Probst, T.; O’Rourke, T.; Stoyanov, S. mHealth applications: Potentials, limitations, current quality and future directions. Digit. Phenotyping Mob. Sens. 2019, 235–248.
- Cade, J.E. Measuring diet in the 21st century: Use of new technologies. Proc. Nutr. Soc. 2017, 76, 276–282.
- Eldridge, A.L.; Piernas, C.; Illner, A.K.; Gibney, M.J.; Gurinović, M.A.; De Vries, J.H.; Cade, J.E. Evaluation of new technology-based tools for dietary intake assessment—An ILSI Europe Dietary Intake and Exposure Task Force evaluation. Nutrients 2019, 11, 55.
- Angra, S.; Ahuja, S. Machine learning and its applications: A review. In Proceedings of the International Conference on Big Data Analytics and Computational Intelligence (ICBDAC), Chirala, India, 23–25 March 2017; pp. 57–60.
- Van Erp, M.; Reynolds, C.; Maynard, D.; Starke, A.; Ibáñez Martín, R.; Andres, F.; Leite, M.C.; Alvarez de Toledo, D.; Schmidt Rivera, X.; Trattner, C.; et al. Using natural language processing and artificial intelligence to explore the nutrition and sustainability of recipes and food. Front. Artif. Intell. 2020, 3, 621577.
- De Moraes Lopes, M.H.B.; Ferreira, D.D.; Ferreira, A.C.B.H.; da Silva, G.R.; Caetano, A.S.; Braz, V.N. Use of artificial intelligence in precision nutrition and fitness. In Artificial Intelligence in Precision Health; Academic Press: Cambridge, MA, USA, 2020; pp. 465–496.
- Zhao, X.; Xu, X.; Li, X.; He, X.; Yang, Y.; Zhu, S. Emerging trends of technology-based dietary assessment: A perspective study. Eur. J. Clin. Nutr. 2021, 75, 582–587.
- Hussain, G.; Maheshwari, M.K.; Memon, M.L.; Jabbar, M.S.; Javed, K. A CNN based automated activity and food recognition using wearable sensor for preventive healthcare. Electronics 2019, 8, 1425.
- Ortega Anderez, D.; Lotfi, A.; Pourabdollah, A. A deep learning based wearable system for food and drink intake recognition. J. Ambient. Intell. Humaniz. Comput. 2021, 12, 9435–9447.
- Lee, K.S. Automatic Estimation of Food Intake Amount Using Visual and Ultrasonic Signals. Electronics 2021, 10, 2153.
- Fakhrou, A.; Kunhoth, J.; Al Maadeed, S. Smartphone-based food recognition system using multiple deep CNN models. Multimed. Tools Appl. 2021, 80, 33011–33032.
- Sun, J.; Radecka, K.; Zilic, Z. FoodTracker: A Real-time Food Detection Mobile Application by Deep Convolutional Neural Networks. arXiv 2019, arXiv:1909.05994.
- Boushey, C.; Spoden, M.; Zhu, F.; Delp, E.; Kerr, D. New mobile methods for dietary assessment: Review of image-assisted and image-based dietary assessment methods. Proc. Nutr. Soc. 2017, 76, 283–294.
- Van Asbroeck, S.; Matthys, C. Use of Different Food Image Recognition Platforms in Dietary Assessment: Comparison Study. JMIR Form. Res. 2020, 4, e15602.
- Allegra, D.; Battiato, S.; Ortis, A.; Urso, S.; Polosa, R. A review on food recognition technology for health applications. Health Psychol. Res. 2020, 30, 8.
- Jiang, L.; Qiu, B.; Liu, X.; Huang, C.; Lin, K. DeepFood: Food image analysis and dietary assessment via deep model. IEEE Access 2020, 8, 47477–47489.
- Min, W.; Jiang, S.; Liu, L.; Rui, Y.; Jain, R. A survey on food computing. ACM Comput. Surv. 2019, 52, 1–36.
- Liu, Y.C.; Chen, C.H.; Lin, Y.S.; Chen, H.Y.; Irianti, D.; Jen, T.N.; Yeh, J.Y.; Chiu, S.Y.H. Design and usability evaluation of mobile voice-added food reporting for elderly people: Randomized controlled trial. JMIR mHealth uHealth 2020, 8, e20317.
- Meyers, A.; Johnston, N.; Rathod, V.; Korattikara, A.; Gorban, A.; Silberman, N.; Guadarrama, S.; Papandreou, G.; Huang, J.; Murphy, K.P. Im2Calories: Towards an automated mobile vision food diary. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1233–1241.
- Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; pp. 675–678.
- Pouladzadeh, P.; Yassine, A.; Shirmohammadi, S. FooDD: An image-based food detection dataset for calorie measurement. In Proceedings of the International Conference on Multimedia Assisted Dietary Management, Genova, Italy, 7–8 September 2015.
- Aizawa, K.; Maruyama, Y.; Li, H.; Morikawa, C. Food balance estimation by using personal dietary tendencies in a multimedia food log. IEEE Trans. Multimed. 2013, 15, 2176–2185.
- Aizawa, K.; Ogawa, M. Foodlog: Multimedia tool for healthcare applications. IEEE MultiMedia 2015, 22, 4–8.
- Horiguchi, S.; Amano, S.; Ogawa, M.; Aizawa, K. Personalized classifier for food image recognition. IEEE Trans. Multimed. 2018, 20, 2836–2848.
- Yu, Q.; Anzawa, M.; Amano, S.; Ogawa, M.; Aizawa, K. Food image recognition by personalized classifier. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 171–175.
- Anzawa, M.; Amano, S.; Yamakata, Y.; Motonaga, K.; Kamei, A.; Aizawa, K. Recognition of multiple food items in a single photo for use in a buffet-style restaurant. IEICE Trans. Inf. Syst. 2019, 102, 410–414.
- Foodvisor. Available online: www.foodvisor.io (accessed on 14 February 2022).
- SnapCalorie. Available online: www.snapcalorie.com (accessed on 14 February 2022).
- Knez, S.; Šajn, L. Food object recognition using a mobile device: Evaluation of currently implemented systems. Trends Food Sci. Technol. 2020, 99, 460–471.
- Pouladzadeh, P.; Shirmohammadi, S. Mobile multi-food recognition using deep learning. ACM Trans. Multimed. Comput. Commun. Appl. 2017, 13, 1–21.
- Tan, M.; Pang, R.; Le, Q.V. EfficientDet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10781–10790.
- Wang, S.; Liu, Y.; Qing, Y.; Wang, C.; Lan, T.; Yao, R. Detection of insulator defects with improved ResNeSt and region proposal network. IEEE Access 2020, 8, 184841–184850.
- Onthoni, D.D.; Sheng, T.W.; Sahoo, P.K.; Wang, L.J.; Gupta, P. Deep learning assisted localization of polycystic kidney on contrast-enhanced CT images. Diagnostics 2020, 10, 1113.
- Magalhães, S.A.; Castro, L.; Moreira, G.; Dos Santos, F.N.; Cunha, M.; Dias, J.; Moreira, A.P. Evaluating the single-shot multibox detector and YOLO deep learning models for the detection of tomatoes in a greenhouse. Sensors 2021, 21, 3569.
- Tsai, M.F.; Lin, P.C.; Huang, Z.H.; Lin, C.H. Multiple Feature Dependency Detection for Deep Learning Technology—Smart Pet Surveillance System Implementation. Electronics 2020, 9, 1387.
- Ramesh, A.; Sivakumar, A.; Angel, S.S. Real-time Food-Object Detection and Localization for Indian Cuisines using Deep Neural Networks. In Proceedings of the 2020 IEEE International Conference on Machine Learning and Applied Network Technologies (ICMLANT), Hyderabad, India, 20–21 December 2020; pp. 1–6.
- Kaggle. Indian Food Image Dataset. Available online: https://www.kaggle.com/datasets/iamsouravbanerjee/indian-food-images-dataset (accessed on 6 May 2022).
- Liu, Y.C.; Chen, C.H.; Tsou, Y.C.; Lin, Y.S.; Chen, H.Y.; Yeh, J.Y.; Chiu, S.Y.H. Evaluating mobile health apps for customized dietary recording for young adults and seniors: Randomized controlled trial. JMIR mHealth uHealth 2019, 7, e10931.
| Ref # | Food Type/Task | Contribution | # of Considered Food Items | Performance |
|---|---|---|---|---|
| [22] | Single-dish and fruits/detection | Develop application for children with visual impairments | 29 | 0.95 |
| [23] | Mixed-dish/detection | Display food nutrition factors | 356 | 0.75 |
| [30] | Mixed-dish/segmentation | Estimate food size | 201 | 0.25 |
| [37] | Multiple-dish/detection | Develop application for buffet-style food | 50 | 0.78 |
| Our work | Single-/mixed-/multiple-dish/set-meal detection | Improve dietary intake reporting | 87 | 0.92 |
| Food Type | Cooking Type | Abbreviation | Total Items |
|---|---|---|---|
| Single-dish | Stir-Fried | SDSF | 24 |
| Single-dish | Boiled | SDBO | 4 |
| Single-dish | Braised | SDBR | 16 |
| Single-dish | Steamed | SDST | 4 |
| Single-dish | Pan-Fried | SDPF | 8 |
| Mixed-dish | Stir-Fried | MDSF | 20 |
| Mixed-dish | Braised | MDBR | 7 |
| Mixed-dish | Boiled | MDBO | 4 |
| Block # | Input Filters | Kernel Size | Stride | # of Repetitions | Output Filters |
|---|---|---|---|---|---|
| p1 | 32 | 3 | 1 | 1 | 16 |
| p2 | 16 | 3 | 2 | 2 | 24 |
| p3 | 24 | 5 | 2 | 2 | 40 |
| p4 | 40 | 3 | 2 | 3 | 80 |
| p5 | 80 | 5 | 1 | 3 | 112 |
| p6 | 112 | 5 | 2 | 4 | 192 |
| p7 | 192 | 3 | 1 | 1 | 320 |
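The block table above can be expressed as a small configuration structure, which also makes its internal consistency checkable: each block's input filter count should equal the previous block's output filter count. This is an illustrative sketch (the tuple layout and function name are mine, not from the paper's implementation):

```python
# Backbone block configuration transcribed from the table:
# (block name, input filters, kernel size, stride, repetitions, output filters)
blocks = [
    ("p1", 32, 3, 1, 1, 16),
    ("p2", 16, 3, 2, 2, 24),
    ("p3", 24, 5, 2, 2, 40),
    ("p4", 40, 3, 2, 3, 80),
    ("p5", 80, 5, 1, 3, 112),
    ("p6", 112, 5, 2, 4, 192),
    ("p7", 192, 3, 1, 1, 320),
]

def filters_chain(blocks):
    """True if each block's input filters match the previous block's output filters."""
    return all(prev[5] == cur[1] for prev, cur in zip(blocks, blocks[1:]))

print(filters_chain(blocks))  # True: the stages chain 16 -> 24 -> 40 -> ... -> 320
```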
Evaluation metrics reported as mean ± SD:

| Architecture | Accuracy | Precision | Recall | F1-Score |
|---|---|---|---|---|
| Our Model | 0.87 (±0.01) | 0.88 (±0.01) | 0.97 (±0.01) | 0.93 (±0.01) |
| SSD Inception V2 | 0.53 (±0.03) | 0.60 (±0.03) | 0.56 (±0.04) | 0.56 (±0.04) |
| Faster R-CNN Inception ResNet V2 | 0.54 (±0.02) | 0.64 (±0.04) | 0.53 (±0.09) | 0.57 (±0.06) |
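Section 4.1's metrics follow the standard confusion-matrix definitions. A minimal sketch of how they are computed (the function name and the example counts are illustrative, not taken from the paper):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1-score
    from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts: 8 true positives, 2 false positives,
# 2 false negatives, 8 true negatives -> every metric is 0.80.
acc, prec, rec, f1 = classification_metrics(8, 2, 2, 8)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))
```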
| Architecture | R1 | R2 | R3 | R4 | R5 | mAP | ±SD |
|---|---|---|---|---|---|---|---|
| Our Model | 0.88 | 0.90 | 0.91 | 0.91 | 0.89 | 0.90 | ±0.01 |
| SSD Inception V2 | 0.63 | 0.55 | 0.56 | 0.59 | 0.53 | 0.577 | ±0.04 |
| Faster R-CNN Inception ResNet V2 | 0.51 | 0.75 | 0.77 | 0.63 | 0.55 | 0.646 | ±0.1 |
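The mAP and ±SD columns summarize the five per-round AP values. As a sketch, averaging our model's row from the table with Python's `statistics` module reproduces the reported 0.90 (±0.01):

```python
from statistics import mean, stdev

# Per-round AP values for our model, taken from the table above.
rounds = [0.88, 0.90, 0.91, 0.91, 0.89]

map_score = mean(rounds)   # 0.898, reported as 0.90
spread = stdev(rounds)     # sample SD, ~0.013, reported as ±0.01
print(round(map_score, 2), round(spread, 2))  # 0.9 0.01
```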
| Type of Dishes | Total Number of Food Items | Avg AP |
|---|---|---|
| Single-dish (including items in multiple-dish) | 56 | 0.88 |
| Mixed-dish (including items in multiple-dish) | 31 | 0.96 |

Overall mAP across all 87 food items: 0.92.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Liu, Y.-C.; Onthoni, D.D.; Mohapatra, S.; Irianti, D.; Sahoo, P.K. Deep-Learning-Assisted Multi-Dish Food Recognition Application for Dietary Intake Reporting. Electronics 2022, 11, 1626. https://doi.org/10.3390/electronics11101626