Article

Attention-Based Convolutional Neural Network for Ingredients Identification

School of Electronic Information, Hangzhou Dianzi University, Hangzhou 310005, China
* Authors to whom correspondence should be addressed.
Entropy 2023, 25(2), 388; https://doi.org/10.3390/e25020388
Submission received: 2 December 2022 / Revised: 10 February 2023 / Accepted: 12 February 2023 / Published: 20 February 2023

Abstract

In recent years, with the development of artificial intelligence, smart catering has become one of the most popular research fields, and ingredients identification is a necessary and significant link in it. The automatic identification of ingredients can effectively reduce labor costs in the acceptance stage of the catering process. Although a few methods for ingredients classification exist, most suffer from low recognition accuracy and poor flexibility. To solve these problems, in this paper we construct a large-scale fresh ingredients database and design an end-to-end multi-attention-based convolutional neural network model for ingredients identification. Our method achieves an accuracy of 95.90% on a classification task covering 170 kinds of ingredients. The experimental results indicate that it is the state-of-the-art method for the automatic identification of ingredients. In addition, considering that new categories beyond our training list may suddenly appear in actual applications, we introduce an open-set recognition module that predicts samples outside the training set as unknown. The accuracy of open-set recognition reaches 74.6%. Our algorithm has been deployed successfully in smart catering systems. According to statistics from actual application scenarios, it achieves an average accuracy of 92% in actual use and saves 60% of the time compared with manual operation.

1. Introduction

The catering industry is one of the country’s leading industries [1,2]. Government agencies, private restaurants, and schools are all inseparable from catering. In recent years, with the development of technology, artificial intelligence has been used almost everywhere and is considered a core skill for the future. The AI market is projected to grow to $190 billion by 2025 [3]. Under such a trend, smart catering [4], which combines artificial intelligence with traditional catering, is becoming increasingly popular. Although artificial intelligence is described in various ways, its core is widely believed to be the theories, methods, technologies, and applications for simulating, extending, and expanding human intelligence [5]. Integrating these advanced technologies with the catering industry has become a research hot spot [6]. Its main advantages are as follows: (1) using digital technology, digital thinking, and digital cognition to promote the digital transformation and intelligent upgrade of the industry; (2) using artificial intelligence [7] to replace traditional manual labor and significantly reduce labor costs; (3) driving deep systemic remodeling [8] and promoting fundamental changes in management processes and rules; (4) relying on the “Ingredient Standard Acceptance Pictorial Database” to support data-based, standardized management of the whole process of food bidding and procurement, acceptance inspection, and supplier and personnel assessment, thereby achieving system optimization and management innovation. The acceptance inspection of ingredients is the first and most crucial step in the entire catering workflow. At present, almost all catering companies need staff to distribute and inspect the ingredients provided by suppliers, which is time-consuming and lacks uniform standards. Therefore, many researchers are committed to realizing intelligent catering with the help of automated identification technology, to improve distribution efficiency and reduce the use of manpower. As a result, quickly and accurately identifying ingredients becomes a critical part of smart catering.
Currently, ingredients identification approaches can be roughly divided into two categories: traditional hand-crafted feature extraction methods and deep learning methods. For example, He et al. [9] extracted global and local features of food images using methods such as K-nearest neighbors and a vocabulary tree, achieving 64.5% accuracy on 42 types of cooked food; Nguyen et al. [10] proposed a classification method using local appearance information and global structure information, achieving 69% accuracy on six kinds of food; Farinella et al. [11] achieved 67.9% accuracy on the 61-class Pittsburgh fast-food image dataset using a Bag-of-Textons image representation combined with a support vector machine (SVM); Joutou et al. [12] introduced multiple-kernel learning to fuse image features such as color, texture, and the scale-invariant feature transform (SIFT), achieving 61.34% accuracy on 50 kinds of food. Liao et al. [13] used deep learning with a maximum-class-spacing loss function to recognize food images with an accuracy of 69.2%; Wu et al. [14] used a deep learning algorithm based on domain confusion and prior trees combined with order information to achieve an accuracy of 72.45% on 60 kinds of fresh food ingredients.
From the above research status, it can be seen that most studies focus on cooked food rather than fresh ingredients, and even those involving fresh ingredients cover only vegetables and fruits. These studies still have some shortcomings: (1) the lack of an applicable large-scale ingredients database; (2) low recognition accuracy; (3) the inability to deal with new categories outside the database. To overcome these problems, in this paper we build a large-scale database and design an end-to-end attention-based DenseNet [15] model with an open-set module. The attention module helps improve accuracy by reducing background interference in the pictures, and the open-set module handles new categories outside the database.
The main workflow of our algorithm is as follows. First, we obtain images from different application scenarios and name the ingredients according to a unified naming standard. After data cleaning, we establish a large-scale database that contains more than 65,000 pictures of ingredients and perform data augmentation. Second, we design a DenseNet model with a multi-attention module to extract features more effectively. The attention module includes a channel module and a spatial module and helps the model focus on crucial information. Then, we input the images into the network for training and obtain a classification model. To deal with new categories outside the database, we add an open-set module after the classification network to obtain prediction results under open-set conditions. The open-set recognition [16] module is relatively independent, so it does not affect the primary network model.
The main contributions of this paper are:
(1) Constructing a large-scale fresh food image database for common ingredients in China for the first time.
(2) Building an end-to-end multi-attention-based DenseNet model for ingredients identification and achieving high accuracy.
(3) Applying open-set recognition for the first time in the field of ingredients recognition to help solve the problem of new categories in practical applications.
The remainder of this paper is organized as follows: Section 1 gives an overview of the paper and explains the research background; Section 2 describes the experimental data and methods, introducing the construction of our ingredients database and the whole network model; Section 3 presents experimental results that show the performance of our model on the ingredients recognition task; and the discussion and conclusions are given in Section 4 and Section 5, respectively.

2. Materials and Methods

2.1. Database Building

2.1.1. Image Collection

In order to ensure the generalization ability of the algorithm, it is necessary to make the data as diverse as possible. In this paper, we consider the ingredients’ source, the image-capturing equipment, the shooting season, and the state of the ingredients during image acquisition. To ensure sample diversity, we choose four scenarios for data collection: catering companies, school canteens, farmers’ markets, and the Internet. Examples of pictures collected in different ways are shown in Figure 1. In catering companies and school canteens, we install a smart device with cameras to take pictures of the ingredients, as shown in Figure 1a. The shooting resolution is uniformly set to 800 × 600. The ingredients are placed in baskets of random colors. During the shooting process, proper manual operations are adopted to increase diversity, such as placing the ingredients in different positions of the frame, changing the relative position between the ingredients, and changing the quantity of the ingredients. For ingredients that may come in packaging bags, pictures both with and without the original packaging are collected. In the market, as shown in Figure 1b, we use mobile phones to capture more pictures with different lighting, angles, and degrees of freshness. This work lasted for one year to ensure that pictures of seasonal ingredients could be collected. Additionally, we collect copyright-free pictures of ingredients from the Internet, as shown in Figure 1c. These pictures usually have more complicated backgrounds, which further increases the diversity of samples. Finally, a large-scale food data set with more than 400 types of ingredients is obtained, containing 68,637 images. The statistics of the food data set are shown in Table 1.

2.1.2. Ingredients Naming

One of the purposes of this paper is to build a large-scale fresh food database. We first formulate a set of unified naming standards referring to national standards. We use an English-letter-plus-number naming method to classify the foods at multiple levels. Each name consists of four parts. The first part is the category, including non-staple food (NF), meat and poultry (NR), fruits and vegetables (NS), and aquatic products (NX). The second part is the subclass under the category; for example, in Figure 2, ‘G’ represents the rhizomes class under ‘NS’. The third part is the specific class, represented by a Chinese abbreviation. The last part is the serial number of the specific image. For example, the first picture of the class eggplant is named NSGQZ001, as shown in Figure 2. All the fresh food ingredients in the library are named in this way. This naming method reflects the degree of differentiation between different ingredient categories and is conducive to statistical analysis.
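As a concrete illustration of this scheme only, the short sketch below composes a file name from the four parts; the helper function and the ‘QZ’ abbreviation for eggplant are inferred from the NSGQZ001 example and are not an official part of the standard.

```python
# Hypothetical helper illustrating the four-part naming scheme described above;
# the function name and example codes are illustrative, not from the paper.
def make_image_name(category: str, subclass: str, abbrev: str, serial: int) -> str:
    """Compose a name such as 'NSGQZ001' from the category code (e.g. 'NS'),
    the subclass code (e.g. 'G' for rhizomes), the Chinese-abbreviation code
    (e.g. 'QZ' for eggplant), and the image serial number."""
    return f"{category}{subclass}{abbrev}{serial:03d}"

print(make_image_name("NS", "G", "QZ", 1))  # -> NSGQZ001
```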

2.2. Image Augmentation

Since the resolution and size of pictures from different sources differ, we apply a unified re-sampling operation to bring all pictures to the same resolution. We tried various resolutions during model training and found that accuracy was highest when the resolution was set to 600 × 600, indicating that the network can extract more useful information at this resolution. The accuracy at different resolutions is shown in Table 2.
In addition, we apply a variety of methods for data augmentation, including transposition, color jitter, and random erasing, as shown in Figure 3.
As shown in Figure 3a, the transposition methods include random translations, flips, and rotations. The translation range is from −100 to 100 pixels in the horizontal and vertical directions, and the rotation angle is between −90 and 90 degrees. The number of images is tripled by the transposition methods. As shown in Figure 3b, the color jitter methods include random changes in brightness, contrast, and saturation. During training, the brightness and contrast enhancement factors are sampled randomly between 0.5 and 1.5, and the saturation enhancement factor is sampled randomly between 0 and 2. The color jitter methods also triple the number of images. As shown in Figure 3c, the random erasing scale is sampled randomly between 0.02 and 0.05, and the erasing methods double the number of images. Finally, after data augmentation, we obtain a data set 18 times the size of the original one.
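A minimal sketch of such an augmentation pipeline in torchvision is given below, assuming the parameter ranges stated above; the application probabilities and the translation fraction (±100 px on a 600 × 600 image) are our own assumptions, since the paper does not list its implementation.

```python
from torchvision import transforms

# Illustrative pipeline only: parameter ranges follow the text above, while the
# application probabilities and the translate fraction (100 px / 600 px) are assumed.
augment = transforms.Compose([
    transforms.Resize((600, 600)),                        # unified re-sampling
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomAffine(degrees=(-90, 90),            # rotation between -90 and 90 degrees
                            translate=(100 / 600, 100 / 600)),
    transforms.ColorJitter(brightness=(0.5, 1.5),
                           contrast=(0.5, 1.5),
                           saturation=(0.0, 2.0)),
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.05)),  # erasing scale 0.02-0.05
])
```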

2.3. Attention-Based Convolutional Neural Network

The classification network designed in this paper is inspired by the DenseNet architecture, to which we add a multi-attention module. The overall structure is shown in Figure 4.

2.3.1. Densely Connected Convolutional Network

DenseNet applies the idea of cross-layer connection to every single layer in the module, which means the input of any convolution layer contains the outputs of all preceding convolution layers. This structure integrates high-level and low-level features so that features can be fully reused, which effectively suppresses over-fitting and reduces the number of parameters. The core of the DenseNet structure is the dense block and the transition layer.
The dense block is a densely connected module and the transition layer is the connection area between two adjacent dense blocks. The connections inside each dense block are shown in Figure 4. The processing of input features can be expressed as:
$$X_l = H_l\left(\left[X_0, X_1, \ldots, X_{l-1}\right]\right)$$
where $X_0$ is the input and $H_l$ is the nonlinear transformation function, which comprises batch normalization (BN) [17], the rectified linear unit (ReLU) [18], and two convolution layers with kernel sizes of 1 × 1 and 3 × 3, respectively. $X_l$ represents the output of the $l$-th layer.
BN normalizes the data so that it follows an approximately standard normal distribution, which reduces the time needed to compute gradients over the training set. It is calculated as:
$$y_i^{(b)} = \mathrm{BN}\!\left(x_i^{(b)}\right) = \gamma \cdot \frac{x_i^{(b)} - \mu_{x_i}}{\sqrt{\sigma_{x_i}^2 + \varepsilon}} + \beta$$
where $x_i^{(b)}$ is the value of the $i$-th input node of the layer when the input is the $b$-th sample of the current batch, and $x_i$ is the row vector $\left(x_i^{(1)}, x_i^{(2)}, x_i^{(3)}, \ldots, x_i^{(m)}\right)$ whose length is the batch size $m$. $\mu_{x_i}$ and $\sigma_{x_i}$ are the mean and standard deviation of $x_i$, respectively, $\varepsilon$ is a small constant introduced for numerical stability, and $\gamma$ and $\beta$ are the scale and shift parameters of the row. $y_i^{(b)}$ is the normalized result. We choose ReLU as the activation function. Compared with other activation functions such as Sigmoid and Tanh, ReLU makes gradient descent and error back-propagation more efficient while avoiding the vanishing-gradient problem. ReLU is a piece-wise linear function that sets all negative values to 0 and leaves positive values unchanged, i.e., one-sided suppression. When the input is negative, the neuron is not activated, so only part of the neurons are active at any time; this sparse activation speeds up computation.
The transition layer (shown as the white box in Figure 5) between every two dense blocks contains three parts: BN, a convolution layer, and an average pooling layer. It mainly serves to reduce the output feature dimension of the dense blocks and improve computational efficiency, and it also performs feature down-sampling. In the transition layers, the convolution kernel size is 1 × 1, which reduces the feature channel dimension, and the 2 × 2 average pooling layer down-samples the features. The relevant parameter for channel dimension reduction is the compression rate $\theta$, which represents the dimension-reduction proportion and takes a value between 0 and 1.
We use DenseNet121 as our backbone model, which contains four dense blocks, as shown in Figure 5. The numbers of dense layers contained in the four dense blocks are 6, 12, 24, and 16, respectively. The channel parameter of the input feature is set to 64. Through the initial convolution layer, the input images are down-sampled and their dimensionality is reduced.
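The following minimal PyTorch sketch illustrates the dense connectivity of the equation for $X_l$, using the standard DenseNet121 bottleneck layout (growth rate 32, 1 × 1 then 3 × 3 convolution). It is an illustrative re-implementation under those standard defaults, not the authors’ code.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One dense layer H_l: BN -> ReLU -> 1x1 conv -> BN -> ReLU -> 3x3 conv."""
    def __init__(self, in_channels: int, growth_rate: int = 32, bottleneck: int = 4):
        super().__init__()
        inter = bottleneck * growth_rate
        self.net = nn.Sequential(
            nn.BatchNorm2d(in_channels), nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, inter, kernel_size=1, bias=False),
            nn.BatchNorm2d(inter), nn.ReLU(inplace=True),
            nn.Conv2d(inter, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # x is the concatenation [X_0, ..., X_{l-1}]; the output is X_l.
        return self.net(x)

class DenseBlock(nn.Module):
    """Densely connected block: every layer sees all previous feature maps."""
    def __init__(self, num_layers: int, in_channels: int, growth_rate: int = 32):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```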

2.3.2. Multi-Attention Module

Since the backgrounds of different images vary, it is crucial to make the network pay more attention to the main parts of the picture and extract more valuable features.
We design an improved multi-attention mechanism based on CBAM [19], which includes two relatively independent sub-modules: a channel attention module [20] and a spatial attention module. The structures of these two attention modules are shown in Figure 6. After operations such as convolution, the amount of information contained in different channels of the feature maps differs. The channel attention mechanism assigns more weight to the channels that contain more useful information. The spatial attention mechanism focuses on learning position information and assigns different weights to different areas, so that the network pays more attention to informative regions. In this paper, we add the multi-attention module after the first dense block, with the channel attention module in front and the spatial attention module behind it. The module’s input is the feature map extracted by the first dense block. The weight matrix obtained by the channel attention module is multiplied by the original input to obtain an optimized feature map, which is then fed to the spatial attention module. After another weight optimization, the improved feature map is passed to the transition layer to continue the subsequent processing.
In the channel attention module, as shown in Figure 6a, we use an LSE pooling layer [21] instead of a max pooling layer; it is calculated as:
$$x_p = \frac{1}{r}\log\!\left[\frac{1}{|S|}\sum_{(i,j)\in S}\exp\!\left(r\,x_{ij}\right)\right]$$
where $x_{ij}$ represents the activation value at pixel $(i, j)$, $|S| = s \times s$ is the total number of points in the pooling region $S$, and $r$ is a hyper-parameter, which we set to 2.
In the spatial attention module, as shown in Figure 6b, we replace the original 7 × 7 convolution kernel with a 5 × 5 dilated convolution [22] kernel whose dilation interval is set to 1, which enlarges the receptive field and reduces the amount of computation.
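The sketch below illustrates one possible implementation of the two modules, with LSE pooling in the channel branch and a 5 × 5 dilated convolution in the spatial branch. The reduction ratio, the pairing of LSE pooling with average pooling (as in CBAM), and the dilation rate of 2 (reading “interval 1” as one skipped position between kernel elements) are assumptions of this sketch, not details confirmed by the paper.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def lse_pool(x: torch.Tensor, r: float = 2.0) -> torch.Tensor:
    # Log-sum-exp pooling over the spatial dimensions (the x_p equation, r = 2).
    b, c, h, w = x.shape
    flat = x.view(b, c, -1)
    return (torch.logsumexp(r * flat, dim=-1) - math.log(h * w)) / r

class ChannelAttention(nn.Module):
    # CBAM-style channel attention; LSE pooling replaces the max-pooling branch.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1).flatten(1))
        lse = self.mlp(lse_pool(x))
        weight = torch.sigmoid(avg + lse)[:, :, None, None]
        return x * weight

class SpatialAttention(nn.Module):
    # CBAM-style spatial attention with a 5x5 dilated convolution (dilation 2 assumed).
    def __init__(self, dilation: int = 2):
        super().__init__()
        # padding = 2 * dilation keeps the spatial size unchanged.
        self.conv = nn.Conv2d(2, 1, kernel_size=5, dilation=dilation,
                              padding=2 * dilation, bias=False)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.max(dim=1, keepdim=True).values
        weight = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * weight
```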

2.4. Open-Set Recognition Module

The above algorithm might encounter two main problems after being deployed in actual application scenarios: (1) Some existing types of ingredients appear infrequently, and it is hard for users to collect enough pictures of them within a specific period of time. If these kinds of food are added to the training set, the difference in the number of samples across classes will be too large, which might cause model bias. (2) With changes in season, demand, and other factors, the types of food supplied every day are not static, and new categories of ingredients outside the data set will appear.
To solve the above problems in practical applications, we introduce an improved open-set recognition module [23] based on OpenMax [24] into our model. The main advantages of this module are as follows: (1) the feature extraction capability of the previous network remains valid, and only a small amount of computation needs to be added; (2) it is relatively independent, so we can flexibly choose whether to use it without affecting the original identification of known categories. The main workflow is shown in Figure 7 and consists of two phases: a training phase and a prediction phase. During the training phase, we fit a Weibull distribution $W(x; \lambda, k)$ [25] to the activation vectors extracted by the model for each category. In the prediction phase, the module calculates the distance between the feature vector of the input image and those of the existing categories and judges whether it follows the known distribution $W(x; \lambda, k)$. If it does, the input image is predicted to belong to a known category; otherwise, it is predicted to be ‘unknown’.
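A simplified sketch of the per-class Weibull fitting and the unknown test is shown below. The tail size, the use of the Euclidean distance to the class mean activation vector, and the rejection threshold are illustrative assumptions, and the full OpenMax recalibration of class scores is omitted.

```python
import numpy as np
from scipy.stats import weibull_min

def fit_class_weibull(class_activations: np.ndarray, tail_size: int = 20):
    """Fit a Weibull model to the largest distances between a class's
    activation vectors and their mean (illustrative; tail_size is assumed)."""
    mean_av = class_activations.mean(axis=0)
    dists = np.linalg.norm(class_activations - mean_av, axis=1)
    tail = np.sort(dists)[-tail_size:]
    shape, loc, scale = weibull_min.fit(tail, floc=0.0)
    return mean_av, (shape, loc, scale)

def is_unknown(activation: np.ndarray, mean_av, weibull_params, threshold: float = 0.95):
    """Flag the sample as 'unknown' when its distance lies far in the Weibull
    tail of the closest known class (threshold is an assumption)."""
    shape, loc, scale = weibull_params
    dist = np.linalg.norm(activation - mean_av)
    tail_prob = weibull_min.cdf(dist, shape, loc=loc, scale=scale)
    return tail_prob > threshold
```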
We have conducted many open-set recognition tests for the specific needs of multiple application scenarios. The data differences between scenarios mainly lie in the total number of categories and the openness. Openness measures the degree to which the set is open and is calculated as:
$$\mathrm{Openness} = \frac{k_{\mathrm{unknown}}}{k_{\mathrm{all}}}$$
where $k_{\mathrm{unknown}}$ is the number of new categories outside the data set and $k_{\mathrm{all}}$ is the total number of categories (known and unknown). According to actual investigation, the openness in application scenarios is about 0.2–0.4.

3. Experiments and Results

3.1. Experiment Setup

Our experiments were carried out on a workstation with four NVIDIA GeForce RTX 2080 Ti GPUs and four Intel Xeon Silver 4110 CPUs. The memory of each GPU is 11 GB. The operating system used for training models is Ubuntu 16.04, and the deep learning framework is the GPU version of PyTorch.
After investigating the occurrence of daily ingredients in different scenarios, we select the 170 categories with the highest frequency of occurrence for training, containing 31,200 pictures in total, to ensure that the number of pictures in each category is sufficient. The average number of pictures per category is 184. After data cleaning, we remove invalid images that are too similar and obtain a database containing 25,000 pictures. We randomly divide the images into a training set, a validation set, and a test set with ratios of 56.25%, 18.75%, and 25%, giving 14,062 pictures in the training set, 4688 in the validation set, and 6250 in the test set. The remaining unused pictures can also be used for testing.
The optimizer we apply is stochastic gradient descent (SGD) [26], and the loss function is the cross-entropy loss. The initial learning rate is set to 0.001 and is automatically adjusted during training according to a learning-rate decay schedule. The batch size is set to 8.
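A minimal training-setup sketch matching these hyper-parameters is given below; the momentum value and the step decay schedule are assumptions, as the paper only states that the learning rate is decayed, and the plain torchvision DenseNet121 stands in for the full attention-based model.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative setup: SGD, cross-entropy loss, initial lr 0.001, batch size 8 (as stated);
# momentum 0.9 and the StepLR schedule are assumed, not taken from the paper.
model = models.densenet121(num_classes=170)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

def train_one_epoch(loader, device="cuda"):
    model.to(device).train()
    for images, labels in loader:  # DataLoader built with batch_size=8
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```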

3.2. Performance Evaluation

The evaluation metrics used in this paper are accuracy, recall, precision, and the F1 score. They are calculated as:
$$\mathrm{accuracy} = \frac{T}{R}, \qquad \mathrm{recall} = \frac{1}{n}\sum_{i=1}^{n}\frac{TP_i}{R_i}, \qquad \mathrm{precision} = \frac{1}{n}\sum_{i=1}^{n}\frac{TP_i}{TP_i + FP_i}, \qquad F1 = \frac{2 \times \mathrm{recall} \times \mathrm{precision}}{\mathrm{recall} + \mathrm{precision}}$$
where $T$ is the number of correctly identified samples among the samples to be tested and $R$ is the total number of samples to be tested. $TP_i$ is the number of correctly identified samples of the $i$-th class, $R_i$ is the total number of samples of the $i$-th class, $FP_i$ is the number of non-$i$-class samples identified as the $i$-th class, and $n$ is the total number of categories.
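For reference, the macro-averaged metrics above can be computed from a confusion matrix as in the following sketch; the helper is illustrative and not the authors’ evaluation code.

```python
import numpy as np

def macro_metrics(C: np.ndarray):
    """Compute accuracy and macro recall/precision/F1 from confusion matrix C,
    where C[i, j] counts samples of true class i predicted as class j."""
    tp = np.diag(C).astype(float)                  # TP_i
    r_i = C.sum(axis=1)                            # R_i: samples of class i
    fp = C.sum(axis=0) - tp                        # FP_i: non-i samples predicted as i
    accuracy = tp.sum() / C.sum()                  # T / R
    recall = np.mean(tp / np.maximum(r_i, 1))
    precision = np.mean(tp / np.maximum(tp + fp, 1))
    f1 = 2 * recall * precision / (recall + precision)
    return accuracy, recall, precision, f1
```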

3.3. Classification Results

3.3.1. Non-Open Set Recognition

We first perform a classification test under the non-open-set condition. The curves of the loss and accuracy on the training and validation sets over the training epochs are shown in Figure 8. The loss gradually decreases and the accuracy gradually increases as training proceeds, and after 70 epochs the curves tend to flatten. The accuracy on the training set is only slightly higher than that on the validation set, indicating that there is no over-fitting during the training process. The accuracy of our algorithm is 97.72% on the training set, 96.29% on the validation set, and 95.90% on the test set.
To further illustrate the effect of the attention module in our model, we use the Grad-CAM [27] method to visualize the output of the last layer of our model for a single input image, as Figure 9 shows. This method intuitively reflects the network’s attention to different parts of the input image. Figure 9a shows the original input pictures, and Figure 9b,c show the visualization results of a DenseNet121 model and of our approach, respectively. The comparison shows that the attention module helps our model focus on the major parts of the whole image, which means that our network can better avoid the influence of interfering factors such as the background.

3.3.2. Open-Set Recognition

We count the order information of ingredients in several application scenarios and find that the openness is around 0.2–0.4. In this paper, we use several open-set conditions to test the effectiveness of our model. For example, in one condition there are 60 common categories. Among them, 6 categories are outside the data set and 9 categories are in the data set but appear with low frequency. We combine these 15 categories and treat them as unknown categories; the other 45 categories of ingredients, comprising 2600 images, are used for training. The openness of this condition is 0.25. We adapt the open-set module to our data by using a weighted combination of the cosine distance and the standardized Euclidean distance to represent the category difference, with weights of 0.8 and 0.2, respectively. During training, the first round of distribution fitting is carried out after 30 epochs to ensure enough effective activation vectors, and the distribution model is updated every 30 epochs thereafter. The test set includes all 60 kinds of food. When an existing food image is input, the output should be the correct category; otherwise, the result is counted as incorrect. When an unknown food image is input, the output should be “unknown”. In Table 3, we summarize the performance of our model under different open-set conditions and compare it with the threshold method, which only sets a likelihood threshold to decide whether the input image belongs to an unknown category. Table 3 shows that across all the trials, the recognition accuracy of our model under open-set conditions is 6.1% higher than that of the threshold method on average.
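The weighted distance used to represent the category difference can be sketched as below; the per-dimension variance vector required by the standardized Euclidean distance is an assumption of this sketch, and the helper name is illustrative.

```python
import numpy as np
from scipy.spatial.distance import cosine, seuclidean

def combined_distance(av: np.ndarray, class_mean: np.ndarray, class_var: np.ndarray,
                      w_cos: float = 0.8, w_seucl: float = 0.2) -> float:
    """0.8 * cosine distance + 0.2 * standardized Euclidean distance between an
    activation vector and a class mean; class_var holds per-dimension variances."""
    d_cos = cosine(av, class_mean)
    d_seucl = seuclidean(av, class_mean, class_var)
    return w_cos * d_cos + w_seucl * d_seucl
```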

4. Discussion

To select a suitable backbone network, we experimented with a variety of models. The results are shown in Table 4. DenseNet121 achieves the best accuracy among all the models with the smallest number of parameters, so we choose it as our backbone network.
Since DenseNet121 has four densely connected blocks, in order to find the best structure we added attention modules at different positions and conducted multiple sets of experiments. The model configurations and their accuracy are shown in Table 5. The number after DenseNet121 indicates where the attention module is added; for example, DenseNet121-1 means that the attention module is placed after the first dense block. ‘d’ indicates the use of dilated convolution in the attention module, and ‘lse’ indicates the use of the LSE pooling layer. Table 5 shows that the attention module improves accuracy in most conditions and that DenseNet121-1 performs best, whereas accuracy decreases when the attention module is added after all four blocks, even dropping below that of the plain DenseNet121. A possible reason is that the excessive weight distribution reduces the effectiveness of the features extracted by the network. Hence, we continued to improve the network based on DenseNet121-1. Our model achieves the highest accuracy, 95.90%, among all the experiments. The parameter counts show that the CBAM-style module is lightweight and does not add much complexity while improving accuracy.
It is worth noting that, under the same structure, algorithms based on shallower and deeper network models differ little in accuracy, indicating that the model can potentially be further compressed and simplified. Since our algorithm is ultimately deployed on application devices, simplifying the model is of great significance for improving computing speed and saving costs. We will continue our work on model compression in the next stage.
We summarize the performance of our model and other related studies in Table 6. Our database is large in scale. Although the data set of Hou et al. [30] contains more images, it covers only vegetables and fruits, and a large part of the images come from the corresponding categories of the ImageNet [31] data set, which are not suitable for our application scenarios. Our data set also includes meat, seafood, and other categories, so the richness of the data is much better. Our model’s recognition accuracy reaches 95.90%, which is much higher than that of other algorithms trained on data of the same order of magnitude. At the same time, the recognition time for a single image is less than 0.2 s, which is suitable for deployment in application scenarios.
Our algorithm has been used in four application scenarios, and the average accuracy reaches 92% according to statistics, which illustrates the generalization ability of our algorithm. However, this accuracy is 3.9% lower than the performance on the test set. We attribute this to differences in the distribution of ingredients and pictures across application scenarios. In future work, we will add new images taken during the use of the algorithm to the data set and update the recognition model to further improve the accuracy and generalization performance of the algorithm.
In this paper, we apply open-set recognition to the field of fresh ingredients recognition for the first time to solve the problem of recognizing new categories in actual scenarios. The openness and the total number of categories in the experiments are set according to the actual situation of each scene. The recognition results in Table 3 show that recognition accuracy is relatively high when the openness is small and that, at the same openness, accuracy depends on the total number of categories participating in training and on the specific categories involved. At present, there is still room to improve the accuracy of open-set recognition in practical applications; we speculate that open-set recognition may require a larger data set. In future work, we will also try other open-set recognition methods such as G-OpenMax [34] and CROSR [35].

5. Conclusions

Quick and efficient identification of ingredients is the first and crucial step in the smart catering workflow. This paper mainly solves the problem of high-accuracy automatic recognition of common fresh ingredients in China, which effectively improves efficiency and accelerates the promotion and popularization of smart catering.
In this paper, a large-scale fresh ingredients data set was constructed. Our data set includes most of the common ingredients in China and has strong universality and value for research in this field. An end-to-end multi-attention-based convolutional neural network model for ingredients identification was proposed, achieving an accuracy of 95.90% on a 170-class classification task, which is better than other related studies. The multi-attention mechanism and the adjustments made according to the specific characteristics of our data set enable the network to extract more valuable features and effectively improve recognition accuracy. Our algorithm has also been deployed in many catering-related enterprises, where the average accuracy across different practical application scenarios is 92%, illustrating its good generalization capability. To solve the problem of new categories in practical application scenarios, we apply open-set recognition to ingredients recognition for the first time: an improved OpenMax module is added to the network and achieves an accuracy of 74.7% under the open-set condition.
Our data set now contains more than 400 kinds of ingredients, and the total number of images exceeds 60,000. Because some categories have insufficient images, we do not currently include all of them in training. We will continue to collect images of ingredients to further enlarge the data set. In future work, more kinds of attention mechanisms will be tested to improve the feature extraction ability of the network, so that the generalization capability and performance on different data sets can also be improved. In addition, we will continue our study on food quality inspection.

Author Contributions

Methodology, S.C.; Software, S.C.; Validation, S.C., R.L., C.W. and J.L.; Formal analysis, C.W. and K.Y.; Investigation, S.C., R.L., J.L., K.Y. and Y.L.; Resources, R.L., K.Y., W.L. and Y.L.; Data curation, R.L. and J.L.; Writing–original draft, S.C.; Visualization, S.C.; Supervision, R.L., C.W., K.Y., W.L. and Y.L.; Funding acquisition, K.Y. and W.L. All authors have read and agreed to the published version of the manuscript.

Funding

Zhejiang Key Research and Development Project (2019C03088), Zhejiang Province Commonweal Projects (LGG22F010012).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Restrictions apply to the availability of these data. The data are owned by Zhejiang Guotong Electric Engineering Co., Ltd. and are available from the authors with the permission of Zhejiang Guotong Electric Engineering Co., Ltd.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Min, W.; Jiang, S.; Liu, L.; Rui, Y.; Jain, R. A survey on food computing. ACM Comput. Surv. (CSUR) 2019, 52, 92.
2. Ji, S.; Zhang, C.; Xu, A.; Shi, Y.; Duan, Y. 3D convolutional neural networks for crop classification with multi-temporal remote sensing images. Remote Sens. 2018, 10, 75.
3. Market Research Report, Markets and Markets, Report Code: TC 7894. 2021. Available online: https://www.marketsandmarkets.com/Market-Reports/artificial-intelligence-market-74851580.html (accessed on 1 December 2022).
4. Chen, H.; Xu, J.; Xiao, G.; Wu, Q.; Zhang, S. Fast auto-clean CNN model for online prediction of food materials. J. Parallel Distrib. Comput. 2018, 117, 218–227.
5. Jiang, Y.; Li, X.; Luo, H.; Yin, S.; Kaynak, O. Quo vadis artificial intelligence? Discov. Artif. Intell. 2022, 2, 4.
6. Chen, J.; Ngo, C.W. Deep-based ingredient recognition for cooking recipe retrieval. In Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 32–41.
7. Christodoulidis, S.; Anthimopoulos, M.; Mougiakakou, S. Food recognition for dietary assessment using deep convolutional neural networks. In Proceedings of the International Conference on Image Analysis and Processing, Genoa, Italy, 7–8 September 2015; Springer: Cham, Switzerland, 2015; pp. 458–465.
8. Herranz, L.; Jiang, S.; Xu, R. Modeling restaurant context for food recognition. IEEE Trans. Multimed. 2016, 19, 430–440.
9. He, Y.; Xu, C.; Khanna, N.; Boushey, C.J.; Delp, E.J. Analysis of food images: Features and classification. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 2744–2748.
10. Nguyen, D.T.; Zong, Z.; Ogunbona, P.O.; Probst, Y.; Li, W. Food image classification using local appearance and global structural information. Neurocomputing 2014, 140, 242–251.
11. Farinella, G.M.; Moltisanti, M.; Battiato, S. Classifying food images represented as bag of textons. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 5212–5216.
12. Hoashi, H.; Joutou, T.; Yanai, K. Image recognition of 85 food categories by feature fusion. In Proceedings of the 2010 IEEE International Symposium on Multimedia, Taichung, Taiwan, 13–15 December 2010; pp. 296–301.
13. Liao, E.; Li, H.; Wang, H.; Pang, X. Food image recognition based on convolutional neural network. J. South China Norm. Univ. (Nat. Sci. Ed.) 2019, 51, 113–119.
14. Xiao, G.; Wu, Q.; Chen, H.; Cao, D.; Guo, J.; Gong, Z. A deep transfer learning solution for food material recognition using electronic scales. IEEE Trans. Ind. Inform. 2019, 16, 2290–2300.
15. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
16. Geng, C.; Huang, S.J.; Chen, S. Recent advances in open set recognition: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3614–3631.
17. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
18. Glorot, X.; Bordes, A.; Bengio, Y. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 11–13 April 2011; pp. 315–323.
19. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
20. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Beijing, China, 20–24 August 2018; pp. 7132–7141.
21. Pinheiro, P.O.; Collobert, R. From image-level to pixel-level labeling with convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1713–1721.
22. Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122.
23. Bendale, A.; Boult, T. Towards open world recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1893–1902.
24. Bendale, A.; Boult, T.E. Towards open set deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1563–1572.
25. Weibull, W. A statistical distribution function of wide applicability. J. Appl. Mech. 1951, 18, 293–297.
26. Bottou, L. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade; Springer: Berlin/Heidelberg, Germany, 2012; pp. 421–436.
27. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
28. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
29. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114.
30. Hou, S.; Feng, Y.; Wang, Z. VegFru: A domain-specific dataset for fine-grained visual categorization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 541–549.
31. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255.
32. Rocha, A.; Hauagge, D.C.; Wainer, J.; Goldenstein, S. Automatic fruit and vegetable classification from images. Comput. Electron. Agric. 2010, 70, 96–104.
33. Zeng, G. Fruit and vegetables classification system using image saliency and convolutional neural network. In Proceedings of the 2017 IEEE 3rd Information Technology and Mechatronics Engineering Conference (ITOEC), Chongqing, China, 3–5 October 2017; pp. 613–617.
34. Ge, Z.; Demyanov, S.; Chen, Z.; Garnavi, R. Generative OpenMax for multi-class open set classification. arXiv 2017, arXiv:1707.07418.
35. Yoshihashi, R.; Shao, W.; Kawakami, R.; You, S.; Iida, M.; Naemura, T. Classification-reconstruction learning for open-set recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4016–4025.
Figure 1. Examples of images from different sources. (a) Images from enterprises and schools taken by our device. (b) Images from market taken by mobile phones. (c) Images from the Internet.
Figure 2. An example of naming an image using our method.
Figure 3. Examples of data augmentation.
Figure 4. The overall structure of our network. The green frame represents dense blocks, the red frame represents attention module, the blue frame represents transition layer and the yellow frame represents open set module.
Figure 5. The inside connection structure of dense block.
Figure 6. The structure of the multi-attention module. (a) The channel attention module. (b) The spatial attention module.
Figure 7. The workflow of the open set module. (a) The training phase. (b) The prediction phase.
Figure 8. The loss and accuracy changing conditions during the training process.
Figure 9. Visualization results of different networks. (a) The input image of the model. (b) The visualized feature map of the DenseNet121 model. (c) The visualized feature map of our model.
Table 1. Statistics of the food data set.
Data Source | Shooting Equipment | Resolution | Classes | Number of Pictures
Enterprise | Smart Catering System | 800 × 600 | 320 | 8325
School | Smart Catering System | 800 × 600 | 175 | 37,051
Market | Phones | 1280 × 720 | 70 | 14,408
Internet | / | Uncertain | 63 | 8853
Table 2. Accuracy at different resolutions.
Resolution | 300 × 300 | 350 × 350 | 400 × 400 | 480 × 480 | 540 × 540 | 600 × 600 | 650 × 650 | 720 × 720
Accuracy (%) | 72.36 | 76.89 | 77.46 | 80.10 | 80.45 | 81.73 | 79.30 | 76.12
Table 3. Open-set performance.
Classes | Images | Openness | Our Model Accuracy (%) | F1 (%) | Threshold Accuracy (%)
15 | 785 | 0.25 | 72.4 | 70.2 | 65.2
15 | 785 | 0.40 | 70.2 | 69.4 | 64.0
47 | 2100 | 0.21 | 71.8 | 70.5 | 66.3
60 | 2600 | 0.25 | 74.7 | 72.9 | 69.3
Table 4. Comparison of different models.
Model | Parameters (M) | Accuracy (%) | Precision (%) | Recall (%) | F1 (%)
Resnet50 [28] | 25.5 | 73.51 | 74.30 | 72.88 | 73.23
Resnet101 [28] | 44.55 | 77.23 | 77.37 | 77.67 | 77.16
Efficientnet-B1 [29] | 7.80 | 79.75 | 79.86 | 78.80 | 79.02
DenseNet121 | 6.98 | 81.73 | 81.57 | 81.46 | 81.44
DenseNet161 | 26.5 | 74.43 | 76.16 | 73.07 | 74.40
Table 5. Performance of DenseNet with different attention modules.
Model | Parameters (M) | Accuracy (%) | Precision (%) | Recall (%) | F1 (%)
DenseNet121 | 6.98 | 81.73 | 81.57 | 81.46 | 81.44
DenseNet121-1 | 6.99 | 93.55 | 91.20 | 94.41 | 92.75
DenseNet121-2 | 7.02 | 92.67 | 92.91 | 92.46 | 92.59
DenseNet121-3 | 7.12 | 93.19 | 93.50 | 93.22 | 93.31
DenseNet121-12 | 7.03 | 76.44 | 77.41 | 76.72 | 76.97
DenseNet121-23 | 7.15 | 82.72 | 82.76 | 82.32 | 82.40
DenseNet121-34 | 7.25 | 86.39 | 86.30 | 85.83 | 85.99
DenseNet121-1234 | 7.29 | 85.86 | 86.26 | 86.06 | 86.15
DenseNet121-1-d | 6.99 | 89.01 | 88.85 | 88.99 | 88.91
DenseNet121-1-lse | 6.99 | 95.02 | 94.42 | 94.78 | 94.51
DenseNet121-12-lse-d | 7.03 | 87.43 | 88.13 | 88.33 | 87.94
Ours | 6.99 | 95.90 | 96.33 | 96.35 | 96.33
Table 6. Comparison between our method and related work on the ingredients classification task.
Author | Method | Number of Classes | Database Size | Kind | Normal Accuracy (%)
Rocha et al. [32] | Feature fusion | 15 | 2633 | Vegetables, fruits | 95.00
Hou et al. [30] | Bilinear pooling | 292 | 160,000 | Vegetables, fruits | 83.51
Zeng et al. [33] | Image saliency | 26 | / | Vegetables, fruits | 95.60
Wu et al. [14] | Transfer learning | 60 | / | Vegetables | 72.45
Ours | Attention-based DenseNet | 170 | 31,200 | Vegetables, meat, seafood, fruits | 95.90
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
