Review

Current Developments in Digital Quantitative Volume Estimation for the Optimisation of Dietary Assessment

1 Clinical Nutrition Research Centre, Singapore Institute for Clinical Sciences, Agency for Science, Technology and Research (A*STAR), Singapore 117599, Singapore
2 Department of Biochemistry, Yong Loo Lin School of Medicine, Singapore 117596, Singapore
* Author to whom correspondence should be addressed.
Nutrients 2020, 12(4), 1167; https://doi.org/10.3390/nu12041167
Submission received: 13 March 2020 / Revised: 18 April 2020 / Accepted: 20 April 2020 / Published: 22 April 2020
(This article belongs to the Section Nutrition Methodology & Assessment)

Abstract
Obesity is a global health problem with wide-reaching economic and social implications. Nutrition surveillance systems are essential to understanding and addressing poor dietary practices. However, diets are incredibly diverse across populations and an accurate diagnosis of individualized nutritional issues is challenging. Current tools used in dietary assessment are cumbersome for users, and are only able to provide approximations of dietary information. Given the need for technological innovation, this paper reviews various novel digital methods for food volume estimation and explores the potential for adopting such technology in the Southeast Asian context. We discuss the current approaches to dietary assessment, as well as the potential opportunities that digital health can offer to the field. Recent advances in optics, computer vision and deep learning show promise in advancing the field of quantitative dietary assessment. The ease of access to the internet and the availability of smartphones with integrated cameras have expanded the toolsets available, and there is potential for automated food volume estimation to be developed and integrated as part of a digital dietary assessment tool. Such a tool may enable public health institutions to be able to gather an effective nutritional insight and combat the rising rates of obesity in the region.

1. Introduction

In recent decades, overweight and obesity have become a global health concern with significant economic and social implications [1,2,3,4,5,6]. This rise has also led to the proliferation of many metabolic and lifestyle diseases, including Type 2 Diabetes Mellitus. The World Health Organization reported in 2014 that 1 in 3 adults globally were overweight, and that 1 in 10 were obese [1]. In 2017, direct costs linked to obesity and its associated diseases in Singapore alone were estimated by the Asian Development Bank Institute to be USD 5.05 billion, or 37.18% of the country’s total healthcare costs [7].
This increasing prevalence of over-nutrition, especially in the Southeast Asian region, has been attributed in part to rapid urbanization in the last few decades [2,8,9,10,11,12,13,14,15]. Urbanization is associated with a combination of dysfunctional food systems, an adoption of Western diets, increased psychological stress and sedentary behaviors, leading to unhealthy environments that contribute to the development of chronic diseases [7,13,16]. In countries such as Vietnam and Laos, exposure to urban environments has been associated with a three-fold increase in obesity [15].
To improve population-level dietary practices and address these impending public health issues, several countries are working on developing healthy, functional nutrition systems [1,2,12,17,18]. However, gathering accurate information required to identify the various nutritional issues has been a challenge for government institutions, researchers and dietitians [19,20]. The diets of Southeast Asians are considerably complex due to diet variety, consumption of composite foods and communal eating practices [12,21,22,23]. Furthermore, the validity and accuracy of traditional methods such as diet records and recall-based tools have been highly disputed. Under-reporting rates as high as 41% have been evidenced in some studies, and there is a need for technological innovation to improve the accuracy of dietary assessment, on both an individual as well as a population level [24,25,26,27,28,29,30,31,32,33].
There is a consensus among the Southeast Asian countries to develop stronger nutrition surveillance systems to provide greater insight into the nutrition situation, and facilitate the implementation of nutrition-focused policies [17]. Adopting a patient-centered approach is crucial in diagnosing and addressing nutritional gaps. Furthermore, the widespread availability of smartphone devices, computer vision technology and improved digital connectivity has opened doors to more precise methods of dietary assessment [34,35,36,37,38].
The aim of this paper will be to explore the technical and cultural hurdles that contribute to the difficulty in assessing diets on both an individual as well as a population level in the Southeast Asian region. The paper articulates the scope of current dietary recording methods; reviews the potential of newly available digital methods of dietary assessment; and considers the viability of their application to the Southeast Asian demography.
A literature review was conducted to identify current image-based digital technologies that assist with estimating food volume. Relevant original research articles and reviews published between January 2008 and January 2020 were identified and included for discussion in this paper. Briefly, the following string of search terms was used in Pubmed and IEEE Xplore Digital Library, with no language or other restrictions: “((image based) OR (food photo) OR (deep learning) OR (food image) OR (food photo)) AND ((food portion estimation) OR (dietary assessment) OR (food volume estimation) OR (calorie estimation) OR (food intake measurement))”. The electronic search was supplemented by manual searches through the reference sections of selected publications, as well as with linked articles that were found to have cited these particular publications.

2. Complexity of the Southeast Asian Diet

The significant dietary diversity in Southeast Asian countries is largely attributed to the many ethnic and cultural food practices, as well as the degree of past and present foreign influence in the region [23,39,40]. Being strategically situated along a major maritime East–West trade route, most countries in Southeast Asia were subject to some form of colonial governance for significant periods in the last few centuries. These factors have molded the cooking styles, taste profiles, as well as the ingredients available to each Southeast Asian country [23]. Chinese immigrants brought along dishes such as noodles served in a broth, curried, or stir-fried with a variety of ingredients; many different types of dumplings and steamed buns; stir-fried, braised and steamed vegetable, fish and meat dishes that pair well with rice [23]. Influence from the Indian subcontinent contributed to foods such as coconut-milk based curries, flatbreads and a myriad of spiced biryanis [23]. European traders and colonial rule brought along with them bread and other bakery products, pâté, salads, as well as many types of vegetables and herbs like cassava, tomatoes and papaya into the region [23]. Many of these influences were adopted and integrated with local produce and flavors, resulting in significant variation throughout Southeast Asia. In more recent times, the wave of rapid globalization has also brought in a whole new set of flavors through the introduction of fast food into the region [23,41,42,43].
All Southeast Asian countries are plural societies characterized by the presence of a dominant ethnic majority and an array of ethnic minorities [23]. Therefore, ethnicity, culture and even religion have a pronounced impact on the choice of foods, types of local ingredients used, structure of meals and patterns of eating behavior. There are further distinctions between urban populations and rural villagers; between the wealthy and the poor; and between the educated and the less educated, and these factors greatly affect access to, as well as choice of, foods [39,41]. This level of diversity can make it difficult for consumption patterns and behaviors to be accurately defined on a population level [23,41,44].

2.1. Meal Settings and Eating Practices

Meals in Southeast Asia are generally communal in nature and sharing of food from central platters is the typical practice [23]. A common type of meal that demonstrates this practice is the consumption of rice paired with a variety of dishes such as curries, braised meats, steamed vegetables and soups [22,23,39,42]. These dishes are often shared between guests at the table and individuals pick food from the central platters onto their own plates or bowls to be eaten with rice. Family meals for most Southeast Asian ethnicities consist of different dishes laid out on a table at the same time, to be picked from as preferred by the diners. At formal events and functions, these meals are also frequently served in a buffet line, allowing diners to pick their preferred choice of food from a variety of selections. At some functions, dishes are served sequentially one after another at the dinner table [23].
The street food culture of Southeast Asia also facilitates the busy urban life. It serves an array of foods that can be easily purchased and consumed on the go, taken back home or brought to the workplace for consumption [14,42,45]. This can range from full meals such as soup noodles or fried rice, to mid-meal snacks such as meat skewers, sandwiches, wraps and dumplings, as well as an assortment of bite-sized local kuih (traditional Southeast Asian pastries and cakes) [23,45].
The purchase of food from these vendors is becoming more commonplace in recent years [42,46]. Many Southeast Asian foods involve complex spice and herb pastes that are time-consuming to prepare and broths that can take hours to simmer [23,42]. In the past, most of these meals were prepared at home but the increasing demands of the workforce have resulted in families having less time to make these authentic foods [45]. Increasingly, many urban families are choosing to either eat out at restaurants or food centers or to buy foods from street vendors to consume with rice at home [23,45]. In 2010, 60.1% of the general adult population in Singapore were found to eat out at least four times a week, and a 1990 survey in Bangkok showed that families spent about half their monthly food expenditure on foods prepared outside the household [23,42]. An increasing reliance on external food sources is a trend seen in many urban societies, and other rapidly urbanizing countries in Southeast Asia may also experience similar trends [14].
From a nutrition standpoint, the vast variety of food sources can make it challenging for individuals to have an in-depth understanding into the types and proportions of ingredients used. Aside from the different recipes used by chefs and establishments, portion sizes can also vary significantly between sources. This dynamic food landscape can make it difficult to define, much less identify, a standardized portion size and can complicate the matching of a food to an item or items in a nutrition table.

2.2. Personalization of Meals

Many Southeast Asian foods involve a level of personalization. For example, noodle vendors in Malaysia or Singapore ask customers for their preferred noodle type, whether they want their noodles served dry or in a broth, and whether they prefer chili or other sauces. Noodles from other Southeast Asian regions such as Vietnam and Thailand have similar features: Vietnamese rice noodles, also known as Pho, come with platters of vegetables, herbs and condiments placed on the table for customers to help themselves, and Thai boat noodles often have a tray of various dry and wet condiments placed at the table for diners to tailor their meals to their own personal tastes [23].
Economy rice is a type of dish consumed by many of the major ethnicities in Southeast Asia, and there are a variety of terms used to describe the meal such as Mixed Vegetable Rice, Cai Fan, Nasi Campur, Nasi Padang or Banana Leaf Rice. The meal typically involves the selection of 2 to 4 dishes from an array of 10 to 30 dishes to be eaten with rice [23]. Serving sizes for each of the dishes can be highly variable depending on the generosity of the stall attendant. Customers can also choose to have their rice doused with various types of curries or sauces, changing the overall profile of the meal to their liking [23]. This degree of customization is also prevalent in the many communal meals that Southeast Asians partake in. Dishes are scooped from shared platters and placed into an individual’s own bowl or plate, to be eaten in the order and quantity of their choosing [23,44].
Personalization and self-regulation are integral parts of Southeast Asian cuisine and can make the assessment of population-wide consumption patterns problematic [40]. This can confound generalized assumptions and mask specific nutritional issues that may be faced by certain population groups, making it difficult for population-wide nutrition policies to be implemented effectively.

3. Limitations of Current Dietary Recording Methods

Diet recording is a prospective method of measuring dietary intake that requires respondents to log intake information over a period of time. It is also used extensively in many aspects of nutrition assessment [28,33]. This can come in the form of food diaries, weighed food records, or duplicate portion measures. This is unlike other tools such as the Food Frequency Questionnaires (FFQ) or the 24-hour diet recall, which rely on obtaining retrospective information about foods consumed in the recent past [28,33].
Diet records are considered to be more detailed than recall-based methods and are preferred as a means of dietary assessment for various reasons [24]. They reduce recall bias by not depending on the subject’s ability to remember consumed foods, as well as interviewer bias, which arises from the detailed probing required in recall-based methods [33]. Respondents are also trained before they commence the diet recording, and can thus be made aware of the necessary details to pay attention to. However, there are several limitations to the use of dietary records.
Dietary records require respondents to record precise amounts of food, often over a few days, and sometimes over weeks [24,32,47,48,49]. This places a huge burden on respondents to ensure that the information is accurate and precise. Respondents are required to be highly motivated and to commit the time and effort needed to complete lengthy records. As a result, high rates of dropout have been evident after three consecutive days of recording [27,33].
In addition to being tedious, the inherent act of recording one’s diet has been known to change a person’s dietary practices, and research has shown shifts towards healthier foods and reduced energy intake in participants after three consecutive days of recording [24,33,47,50]. To reduce the amount of effort required, respondents have also been noted to progressively record less detail of their meals over the duration of the recording period, sometimes omitting more complex items, and even changing their diets to favor simpler foods [27,32,47,48,49]. Increased dietary awareness as a result of pre-study participant training may also affect a person’s dietary practices and present an inaccurate assessment of their habitual consumption patterns. The fear of being criticized for poor eating habits may also lead respondents to omit less desirable foods and/or over-quantify healthier foods. This is known as social desirability bias and can affect the integrity of the record [24,28,32].
Poor literacy and low education levels can compound inattention to portion sizes and ingredients used, in turn affecting the accuracy of the dietary records [24,27,32,51,52,53]. Although most prospective studies begin with a degree of training for respondents, the effectiveness and receptiveness of the training is dependent on the participants’ abilities to grasp and execute these concepts [24,32,51,52]. This is further amplified by the complexity of Southeast Asian diets, and the reliance on food consumed outside the home can make it difficult for individuals to be aware of all the ingredients in the food [32,44]. Mixed dishes are featured heavily in Southeast Asian cuisine and are known to be especially difficult to quantify given the different proportions of ingredients used in different settings [54]. The shared or multi-course meals common in Asian demographics may also further complicate the quantification of consumed foods, and respondents may leave out many details when attempting to record their intake [44].
Significant errors can also occur at the point of coding, especially in the case of open-ended diet records [33,51,52,53,54,55]. These errors include: misinterpretation of portion sizes; unclear or inconsistent food names, particularly in the case of ethnic foods; or lack of detail with regards to food preparation methods and specific ingredients [54]. All of these factors require a degree of professional judgement on the part of the coder when the food records are interpreted, and this can differ between nutrition professionals [54]. Guan et al. found a 26% discrepancy rate when food record entries were verified against their respective FoodWorks analyses entered by coders, indicating that a certain degree of subjective extrapolation was required to match paper records with information available in the database [54]. The choice of analysis software and nutrient database can also add another layer of complication, with different tools having different values for similar items. As a result of these many factors influencing the outcome of dietary assessments, differences in total calories extrapolated from food records may be evident between dietitians or researchers within a team.
As mentioned earlier, dietary records require large amounts of resources per respondent, and are difficult to implement in large scale research, or public health settings [28,51]. Despite 7-day Diet Records being considered one of the better-regarded methods for dietary assessment, the number of participants that researchers are able to recruit is limited not only because of the difficulties faced by respondents, but due to the time-consuming nature of participant training and data processing [33,47]. At the end of studies, researchers may be presented with huge amounts of data to transcribe, and depending on the dietary complexity of the target group and the length of the study, this can put a significant strain on resources [54,56].

4. Digitization of Dietary Collection Methods

4.1. Feasibility of Going Digital

Advances in technology have alleviated some of the shortcomings of traditional dietary recording methods. Access to the internet, the availability of mobile phones, and the widespread adoption of camera-equipped smartphones among the general population have expanded the toolset for dietary assessment [57,58,59,60].
Southeast Asia has shown rapid growth in the use of technology and is a potential market for mobile healthcare apps: internet usage in the region increased from 12% in 2007 to 44% in 2017, and mobile broadband subscriptions increased from 1.3% in 2009 to 85% in 2017 [34,61]. These healthcare apps may require high initial costs to develop, but can greatly reduce the effort and uncertainty required for the collection and handling of data. This alleviates some of the burden on both respondents and researchers alike and improves the feasibility of dietary records in larger populations [24,51,52,61].
Currently, there are a multitude of digital dietary recording apps available [50]. The earliest forms began with simple text-based platforms that allowed users to type in the names and quantities of consumed foods [53]. Improvements to online communication platforms and smartphones have allowed apps to evolve beyond just text-based systems to include integrating image-based dietary recording, health coaching and dietary consultation into the functionality of the apps [38,62,63,64]. Further advances in computer vision and smartphone on-board processing capacity have also recently opened the door to image recognition and segmentation capabilities, indicating a potential for foods to be automatically identified and categorized as required for further processing [65].

4.2. Benefits of Digital Healthcare Solutions

One of the primary drivers for the use of digital solutions is their ability to personalize and individualize healthcare. This idea of personalized nutrition involves shifting the delivery of nutrition and lifestyle recommendations from a population level to an individual level, enabling greater adherence to nutrition goals and more effective behavior changes for users [58,59,66]. Digital applications enable healthcare professionals, researchers and even peers, to be in contact with subjects more frequently and for longer periods of time. This leads to more representative data of a person’s long-term diet coupled with an increased level of engagement, social support and feedback, allowing the delivery of more successful interventions [59,67,68]. By being more integrated in the lives of consumers, healthcare apps can assist with the translation of health education into application. Apps can provide nutrition education and allow for the setting of dietary goals. Many of these apps have been well received by consumers and facilitate improvements to the predictors of behavior change such as knowledge, self-efficacy and motivation [37,60,67,68,69,70,71,72]. Being able to be in contact with a healthcare professional also provides a layer of support whenever required, and this has been shown to improve engagement and satisfaction with the lifestyle change [38,58,60,62,63].
Using digital solutions also reduces the burden placed on respondents and healthcare professionals [53,60,67,73]. Many studies have found that users prefer the use of digital dietary recording over traditional paper methods for ergonomic and practical reasons [27,28,37,38,52,58,60,62,67,71,72,74]. Smart devices are highly integrated into modern lifestyles and are less intrusive to daily routines as users do not need to carry around an additional item such as a journal to record their diets with. In addition, these applications can also actively remind users to input their meal entries when required [50,70,75]. The ability to easily take photos of food also reduces the amount of textual input required by users. This makes the process less subjective and reduces the reliance on the respondents’ ability to recall or describe [24,28,32,33,63]. The simpler process requires less time and effort on the part of respondents, and therefore is less likely to cause the research-related behavior changes that are often associated with tedious dietary records [24,27,32,47,48,76]. From a healthcare standpoint, the variability in interpreted dietary results introduced during the coding process may also be substantially reduced with the automation of this process, potentially minimizing the chance for human error and standardizing the task of data extrapolation [55].
Organizationally, digital touchpoints are also expected to significantly reduce healthcare costs [33,56]. By enabling automated data capture and processing, the time required for the interpretation of diet records can be mitigated, decreasing the administrative load on healthcare professionals and researchers [33,55,56]. Patients can also be assessed and followed up remotely, reducing patient traffic in crowded healthcare institutions and reducing the geographical limitations faced by travelling clinicians in rural areas [53,56,73].
The level of personalized engagement and resource magnification that digital nutrition applications have the potential to provide is unprecedented. Adoption of these new forms of data collection could allow for nutrition data to be gathered in more detail, across larger populations, blurring the lines between individual- and population-level assessment methods. Given the increasing integration of digital platforms and devices into all walks of life, policy makers, researchers and dietitians should consider these avenues as a means of facilitating better health outcomes.

4.3. Limitations of Digital Healthcare Solutions

That being said, care must be taken to ensure successful and responsible adoption of digital solutions. Given that many apps have not been validated and may base recommendations and information on incorrect nutrient databases, both consumers and health professionals need to be wary of the accuracy of mobile applications [71]. Braz et al. assessed 16 apps in Brazil and found that despite most of them receiving high favorability from consumers (81.25% of apps receiving four stars and above), the energy values provided in many of the apps deviated by an average of 34% when compared to an officially regulated nutrition database, with discrepancies as high as 57% [71]. Apps that allow for new database entries to be added by consumers may be subjected to an additional layer of inconsistency. In one particular study, there was significant variation in caloric data between English-speaking and Portuguese-speaking users of the nutrition app MyFitnessPal because Portuguese entries were created by users and not by the company [70]. For these technologies to be accurate, these databases need to be regulated by professional health bodies that can provide credible and authoritative sources of evidence-based health and nutrition information.
The lack of regulation for health and lifestyle apps may be a concern with regards to the quality and efficacy of future apps. Issom et al. reviewed diabetes-focused mobile apps between 2010 and 2015, and found that only 8% of 53 publicly available apps were Food and Drug Administration (FDA)-approved; 4% were approved under the Health Insurance Portability and Accountability Act (HIPAA); and 6% obtained a CE marking which indicates conformity with health, safety and environmental protection standards [68]. Apps that focus solely on diet and diet recommendations are not considered mobile medical applications by the FDA and thus do not need to abide by their quality-control guidelines [77].
There are also some respondent burdens that may not be as easily resolved with the use of technology. Diet recording apps do not fully overcome methodological biases that traditional methods have with self-reporting [24,28]. Individuals have been shown to still have the tendency to under-report intake data, omit certain food choices to avoid social judgement or alter their diets when they are aware of an upcoming survey [78]. These issues could confound results in both individual- as well as population-level studies into nutrition and may be even harder to detect remotely in the absence of face-to-face interaction.
In the interest of data accuracy, current commercially available apps are still unable to determine absolute food volume and by extension, nutrient content of foods. Instead, these applications rely on user input or selection from a list of common serving sizes of identified foods for use in deriving estimations of nutritional values [65]. As discussed above, users may not be able to accurately quantify food and serving sizes have been shown to vary considerably even for similar foods [46]. To eliminate these discrepancies and reduce the variance associated with the quantification of food intake, an automated tool for food volume determination will eventually be needed to advance the field [28,51,79,80].

5. Recent Developments in Food Volume Estimation

In the field of automated vision-based dietary assessment, it has been established that images need to undergo a few steps of analysis: segmentation, classification, volume assessment and nutrient derivation (Figure 1) [81]. Segmentation uses computer-vision tools such as GrabCut to define the borders and separate the respective foods in the image; classification uses deep learning principles such as Convolutional Neural Networks (CNN) to identify the foods; volume assessment involves the determination of the volume of the identified segmented foods; and lastly, nutrient derivation matches the assessed volume with density and nutrient datasets to calculate the nutrients or calories contained within the foods in the image.
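To make the final nutrient-derivation step concrete, the arithmetic can be sketched in a few lines. This is an illustrative sketch only, not code from any cited system, and the density and energy values used in the example are assumed placeholders rather than figures from a regulated nutrient database:

```python
# Nutrient derivation: convert an estimated food volume into mass and energy
# by matching it against density and nutrient data, as described above.
# The density (g/mL) and energy (kcal/g) values here are illustrative only.

def nutrients_from_volume(food, volume_ml, density_g_per_ml, kcal_per_g):
    """Map an estimated volume (mL) to mass (g) and energy (kcal)."""
    grams = volume_ml * density_g_per_ml
    kcal = grams * kcal_per_g
    return {"food": food, "grams": round(grams, 1), "kcal": round(kcal, 1)}

# Example: 200 mL of cooked rice, assuming a density of 0.85 g/mL and an
# energy value of 1.3 kcal/g (both hypothetical), gives 170 g and 221 kcal.
estimate = nutrients_from_volume("cooked rice", 200, 0.85, 1.3)
```

In a full pipeline, the food label produced by the classification step would key into the density and nutrient tables, so the accuracy of this final step is bounded by the quality of those datasets as well as by the volume estimate itself.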
At present, the steps that involve segmentation, classification and nutrient derivation are well developed with well-prescribed methods that can perform the task well, subject to the integrity and precision of their respective training datasets [35,65,81,82,83,84]. However, the current state of the art of food volume estimation is not deployable into commercial apps due to many gaps and technological issues. With the developments in optics, computer vision and deep learning, this research area has seen many improvements that could hasten its eventual integration with the other aspects of dietary assessment. Some of the newer technologies have been summarized in Supplementary Table S1 and we will be reviewing them below.

5.1. Scale Calibration Principles

Volume estimation requires scale references to be established within an image so that the dimensional coordinates of the target object can be determined. Two-dimensional images are a collection of pixels and do not provide any indication of the relative sizes of the objects pictured in the image. Researchers have explored various methods of scale calibration, but they can be broadly categorized by whether a system requires an additional physical reference marker, also known as a fiducial marker, or whether it is able to extrapolate scale via other digital means [85].
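The arithmetic behind any such reference is the same: an object of known physical size fixes the image's millimetre-per-pixel scale, which then converts pixel measurements into real-world units. A minimal sketch, with all numbers illustrative rather than drawn from any cited study:

```python
# Scale calibration from a reference object of known size (e.g. a plate of
# known diameter): its pixel extent fixes the mm-per-pixel scale, which then
# converts measured pixel dimensions into physical units.
# All numbers below are illustrative.

def mm_per_pixel(reference_mm, reference_px):
    """Scale factor from a reference object's known size and its pixel extent."""
    return reference_mm / reference_px

def pixel_area_to_cm2(area_px, scale_mm_per_px):
    """Convert a segmented region's pixel area into cm^2 (areas scale with
    the square of the linear scale factor)."""
    area_mm2 = area_px * scale_mm_per_px ** 2
    return area_mm2 / 100.0  # 100 mm^2 per cm^2

# A 260 mm plate spanning 1300 pixels gives a 0.2 mm/pixel scale...
scale = mm_per_pixel(260, 1300)
# ...so a segmented food region of 90,000 pixels covers about 36 cm^2.
food_area_cm2 = pixel_area_to_cm2(90_000, scale)
```

Converting such a calibrated area into a volume then requires an additional assumption or measurement of the food's height or 3D shape, which is precisely where the methods reviewed below diverge.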

5.1.1. Physical Fiducial Markers

Physical fiducial markers can come in many forms and serve as a reference point for dimensional calculations. The earlier variants were specialized equipment distributed by the researchers: a study by Yue et al. used standard plates of known diameters so that all food could be estimated relative to the size of the plate [86]; Jia et al. explored the positioning of a placemat outlined with special markings placed underneath the food [87]; and many studies utilized various forms of colored and/or chequered cards of known size placed next to food items to serve as a reference for scale calibration [75,88,89,90,91,92,93,94,95,96]. These methods can be useful in controlled environments such as in hospitals, schools or canteens, where the fiducial markers can be printed on trays, or standardized crockery can be used to provide a convenient degree of scale calibration within the pictures. However, for participants who are required to use these methods in free living conditions, being required to carry around an additional item may be inconvenient and may lead to poor compliance with study protocol [85].
Researchers have also experimented with using foods or tableware present in the picture as a basis for scale calibration. Based on the assumptions that Japanese rice grains are similar in size, and that most Japanese meals are frequently consumed with a bowl of rice present, Ege et al. proposed an ingenious system that utilized the grains of rice in the photo as the markers for volume estimation [97]. Though relatively accurate, with 84% of estimated results incurring less than 10% relative error, the method can be quite restrictive and culturally specific. This will unfortunately not work in other contexts where different types of rice are consumed, or when rice is not present in the meal. Akpa et al. also explored the use of another culturally-specific fiducial marker in the form of chopsticks [98]. By placing one chopstick on top of the bowl and the other on the table, they were able to deduce the estimated depth of the bowl from the differences in perceived length between the two chopsticks, obtaining promising results with a low relative error of 7%. These technologies could have much potential if their capabilities are further expanded to integrate multiple types of commonly available cross-cultural foods or tableware as markers.
In the interest of ergonomics and usability, common everyday items of known dimensions, such as a one-yuan coin or a credit card, have been trialed [91,99]. More personalized systems calibrate the size of the user’s thumb for use as a fiducial marker in inferring scale. Users are then instructed to place their thumb next to their food when capturing images, providing a convenient marker unique to each individual user [100,101].
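The calibration step shared by all of these marker-based methods can be sketched in a few lines: a marker of known real-world size yields a millimetres-per-pixel scale, which is then applied to the segmented food region. This is an illustrative sketch, not code from any cited study; the pixel counts are hypothetical.

```python
# Hypothetical sketch: deriving real-world scale from a fiducial marker
# of known size, then applying it to a food region measured in pixels.

def scale_from_marker(marker_pixel_width: float, marker_real_width_mm: float) -> float:
    """Millimetres represented by one pixel at the marker's plane."""
    return marker_real_width_mm / marker_pixel_width

def region_area_cm2(region_pixel_count: int, mm_per_pixel: float) -> float:
    """Convert a segmented region's pixel count to an area in cm^2."""
    area_mm2 = region_pixel_count * mm_per_pixel ** 2
    return area_mm2 / 100.0

# A credit card is 85.6 mm wide; suppose it spans 428 pixels in the image.
mm_per_px = scale_from_marker(428, 85.6)          # 0.2 mm per pixel
food_area = region_area_cm2(50_000, mm_per_px)    # 50,000-pixel food region
print(round(mm_per_px, 3), round(food_area, 1))   # 0.2 20.0
```

Note that the scale only holds at the marker’s distance from the camera, which is why markers are placed directly beside the food.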

5.1.2. Digital Scale Calibration

Given that the above methods require an additional step on the part of users and can affect compliance and adherence to a tool, researchers have experimented with alternative methods that circumvent the need for a physical fiducial marker. Some of these new tools draw on modern computer vision technology, including stereo vision, structured light, augmented reality and depth sensing.
Subhi et al. explored the use of eyewear outfitted with twin cameras for stereo imaging [102]. As the cameras were positioned a pre-determined distance apart, object dimensions could be calculated from the positions of key features identified in the pair of images [102]. Shang and Makhsous et al. utilized a smartphone attachment that projected a structured light pattern consisting of multiple laser dots onto targeted foods [103,104]. This created a visible set of coordinates on the food from which distance and the food’s 3D model could be determined. Similarly, Jia et al. extended their colored plate marker experiments by using an elliptical ring projected by an LED spotlight [86]. Though these tools circumvent the need for a physical marker, they require specialized equipment to be carried around and set up for the scale calibration to work, and may hence be inconvenient for users in free-living conditions.
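Twin-camera systems like Subhi et al.’s rest on the standard stereo triangulation relation: depth is the focal length times the camera baseline divided by the disparity (the pixel shift of a feature between the two views). The sketch below shows this relation with invented numbers; it is not the authors’ implementation.

```python
# Standard pinhole stereo relation: Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the two cameras, and d the
# disparity of a matched feature. All values below are hypothetical.

def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Distance (mm) to a feature seen in both images of a stereo pair."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_mm / disparity_px

# Cameras 60 mm apart with an 800 px focal length; a point on the food
# shifts 120 px between the left and right images.
z = depth_from_disparity(800, 60, 120)
print(z)  # 400.0 (mm from the cameras)
```

The fixed baseline plays the role of the fiducial marker: because it is known in advance, no reference object needs to appear in the scene.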
Tanno et al. and Yang et al. explored the use of Augmented Reality (AR) to enable volume estimation without the need for a physical fiducial marker [105,106]. In a study by Yang et al., the phone was placed on a flat surface, which allowed for a “table cloth” with grid markers to be projected into virtual space to serve as reference points for volume estimations [106]. Tanno et al. acquired the 3-dimensional coordinates of the world via the Apple ARKit framework, allowing for relative distance of reference points on the food object to be determined and quantified [105].
The increasing availability and accuracy of depth sensing technology in recent years has made it possible to detect form and shape, and to establish the scale of objects beyond the limits of red/green/blue (RGB) images. Depth cameras such as the Microsoft Kinect or Intel RealSense have been used in the fields of engineering and medicine, and are now being integrated into the field of nutrition [107,108,109,110]. Modern smartphones are also being outfitted with multiple cameras that allow for stereo vision and depth perception. Given the ability to detect distance accurately, these modern methods have enabled researchers to circumvent the need for a fiducial marker entirely [107,108,109,111].

5.2. Volume Mapping

Once the scale of the objects within the images has been determined, geometrical landmarks on the food items can then be established. This serves as the basis for volume to be extrapolated.

5.2.1. Pixel Density

As shown in Figure 2, Liang and Li estimated volume by determining the number of pixels occupied by a one-yuan coin and comparing it to the number of pixels taken up by the target food [99]. Geometric measurements were based on images taken from the top and side angles. The volume was then calculated with one of three formulas, depending on whether the food was ellipsoidal, columnar, or irregular.
Zhang et al. performed volume estimation by processing captured 2D images and calculating the number of pixels per segment of food [112]. Though the authors commented that the counting of pixels led to a “reasonably good estimate”, this was not quantified in the paper. Okamoto et al. employed a similar method which compared the number of pixels occupied by foods in a 2D image to a reference object. Calories were then directly calculated with quadratic equations established from training data, achieving a relative error of 9%–35% for the three test foods [113].
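The pixel-counting idea can be sketched as follows: the ratio of food pixels to reference-object pixels gives a size ratio, and a quadratic fitted per food type from training data maps that ratio to calories, in the spirit of Okamoto et al.’s approach. The coefficients and pixel counts here are invented for illustration.

```python
# Hypothetical sketch of pixel-ratio calorie estimation. The quadratic
# coefficients (a, b, c) would be fitted per food type from training data.

def size_ratio(food_pixels: int, reference_pixels: int) -> float:
    """How many times larger the food appears than the reference object."""
    return food_pixels / reference_pixels

def calories_from_ratio(r: float, a: float, b: float, c: float) -> float:
    """Quadratic calibration mapping a size ratio to calories."""
    return a * r ** 2 + b * r + c

r = size_ratio(36_000, 12_000)                     # food covers 3x the reference
kcal = calories_from_ratio(r, a=10.0, b=50.0, c=5.0)
print(r, kcal)  # 3.0 245.0
```

Because the mapping is fitted per food type, such systems only generalize to foods represented in the training data.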

5.2.2. Geometric Modelling

Geometric modelling is the use of geometric shapes of known or easily calculable volumes, such as prisms, ellipsoids, cylinders and cuboids. These shapes are projected and fitted onto identified, segmented foods in the presented image, allowing food volume to be estimated.
Subhi et al.’s eyewear-mounted stereo cameras allowed corners of food to be identified with the use of an edge-detection algorithm. This allowed a boundary cube to be defined by the detected points and projected into 3D space [102]. This method, however, resulted in empty spaces being factored in, leading to overestimations of volume, especially in the case of irregularly shaped objects. Volume derived from this geometric model proved to be a relatively accurate option, achieving a relative error of 2% to 13% [102].
Researchers have utilized both user- and computer-assisted methods to map geometric models resembling food objects to serve as a basis of volume estimation. Woo et al. experimented with the use of prismatic and spherical models; Chae et al. experimented with the use of cylindrical and solid, flat-top models; and Jia et al. used a series of different shapes, including a spherical cap that could mimic the volumetric outline of foods piled on a plate (Figure 3A) [75,90,114]. These shapes were projected over foods and used as a basis for calculation of objects such as oranges (spherical), cups of beverages (cylindrical) and scrambled eggs (prismatic).
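Once a template shape has been fitted and its dimensions recovered in real units, volume follows from the closed-form solid. The sketch below shows two of the shapes mentioned above, including the spherical cap used by Jia et al. for food piled on a plate; the food dimensions are hypothetical.

```python
import math

# Closed-form volumes for two common food template shapes. The dome
# (spherical cap) mirrors the shape used for foods piled on a plate;
# all dimensions below are invented for illustration.

def spherical_cap_volume(base_radius_cm: float, height_cm: float) -> float:
    """Volume of a dome of height h sitting on a circular base of radius a."""
    a, h = base_radius_cm, height_cm
    return math.pi * h * (3 * a ** 2 + h ** 2) / 6

def cylinder_volume(radius_cm: float, height_cm: float) -> float:
    """Volume of a columnar food such as a cup of beverage."""
    return math.pi * radius_cm ** 2 * height_cm

# Rice piled on a plate: 6 cm base radius, 3 cm high dome.
print(round(spherical_cap_volume(6, 3), 1))   # ~183.8 cm^3
# A cup of beverage: 4 cm radius, 9 cm tall.
print(round(cylinder_volume(4, 9), 1))        # ~452.4 cm^3
```

The accuracy of such methods is bounded by how well the chosen template matches the food’s true shape, which is why irregular foods remain problematic.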
With the virtual tablecloth projected by their AR technology, Yang et al. allowed users to align a cube of known volume next to target foods for comparison (Figure 3B) [106]. The cube could be scaled to various volumes, allowing users to manually estimate the volume of the food against that of the cube. However, as this study only provided a cube as a reference shape, smaller, irregularly shaped foods were more challenging to measure.
With the aid of Apple’s ARKit framework, Tanno et al.’s proposed method allowed users to project points into virtual space with their iPhone devices [105]. Users could then identify corners of foods, from which a boundary box could be drawn. Quadratic equations were then applied to derive calories from the calculated boundary box. This method was innovative but restrictive, given that the boundary box and quadratic equations used may encounter problems with irregularly shaped foods.

5.2.3. Machine and Deep Learning

The use of deep learning has been experimented with in the field of volume estimation, primarily involving network architectures such as Convolutional Neural Networks (CNNs). A CNN learns to identify and recognize similarities in images, and can be used to amalgamate key volume, caloric and classification information for various types of foods [115]. These metrics can then be predicted when new food images are supplied.
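The two core operations that let a CNN pick out local visual features of food images — convolution and pooling — can be illustrated in a few lines of numpy. Real systems stack many learned filters; the single hand-set edge filter below is purely illustrative.

```python
import numpy as np

# Minimal numpy illustration of a CNN's building blocks: a 2D
# convolution (cross-correlation) followed by ReLU and max-pooling.

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 2D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x: np.ndarray, size: int = 2) -> np.ndarray:
    """Downsample by taking the maximum over non-overlapping windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

image = np.zeros((6, 6))
image[:, 3:] = 1.0                        # a toy image with a vertical edge
edge_filter = np.array([[-1.0, 1.0]])     # responds to left-to-right steps
feature_map = np.maximum(conv2d(image, edge_filter), 0)  # ReLU activation
print(max_pool(feature_map).shape)        # (3, 2)
```

In a trained network the filter weights are learned rather than hand-set, and hundreds of such feature maps feed into the layers that regress volume or calories.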
Though not specifically targeting food volume, Ege et al. proposed a similar system that directly estimates calories based on ingredients and cooking directions found on various recipe websites [116]. The researchers created a calorie-annotated food image database that calculated calories from the ingredients listed in the recipe cards, and corresponded this information with the food images provided in the recipe listing. The relative calories of newly input foods are then estimated based on their similarities with the food images found in the database.
Isaksen et al. attempted a model that utilized machine learning for the entire assessment process of segmentation, classification, weight extrapolation and calorie derivation [117]. The training dataset consisted of images taken from the FoodX dataset created by researchers at the University of Agder. These training images were annotated with weights and nutrient values, and a ruler was placed beside photographed foods for scale. Though the system was able to successfully achieve every step of the process, it was inaccurate and incurred an average error of 62% [117].
Chokr et al. experimented with a similar system for fast foods [118]. However, instead of obtaining caloric and volumetric information from ingredients listed in recipes, a dataset of 1000 images across six categories of food was sampled from the Pittsburgh fast-food image dataset. The respective sizes and calories were annotated on the images with information obtained from online restaurant websites or nutrition databases, and these images were then used to train the system. The system achieved a mean absolute error of only 9% with test images taken from the same dataset. However, the dataset was drawn from a collection of 11 popular fast-food restaurants whose foods looked largely similar to one another, and may not have presented the same degree of challenge as photos taken in free-living conditions [119].

5.2.4. Depth Mapping

Depth mapping is a representation of the area occupied by an object in 3D space through a projection of voxels (volume pixels) [110]. This technology presents opportunities for food volume measurement given the precise amount of detail that can be captured on irregularly shaped objects. A depth map of an object can be determined by various methods such as stereoscopic vision, structured light, depth sensing or deep learning. As mentioned earlier, these optical approaches are able to determine the relative distance of objects within the picture, allowing the 3-dimensional surface of food to be calculated without the need for a fiducial marker [108,111]. Height is defined as the distance from the reference plane to each identified surface coordinate, from which volume can be calculated.
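The integration step above can be sketched directly: with the table (reference plane) at a known distance, each food pixel contributes a column whose height is the gap between the plane and the sensed surface, and whose footprint is the real-world area of one pixel. This is an illustrative sketch with a toy depth image, not code from any cited system.

```python
import numpy as np

# Hypothetical sketch of volume integration from a per-pixel depth map.

def volume_from_depth(depth_mm: np.ndarray, plane_mm: float,
                      food_mask: np.ndarray, pixel_area_mm2: float) -> float:
    """Food volume in cm^3: sum of per-pixel columns above the table plane."""
    heights = np.clip(plane_mm - depth_mm, 0, None)   # mm above the table
    volume_mm3 = np.sum(heights[food_mask]) * pixel_area_mm2
    return volume_mm3 / 1000.0

# A 4x4 toy depth image: table at 400 mm, a 2x2 food patch 20 mm tall.
depth = np.full((4, 4), 400.0)
depth[1:3, 1:3] = 380.0                   # food surface 20 mm above table
mask = depth < 400.0
print(volume_from_depth(depth, 400.0, mask, pixel_area_mm2=4.0))  # 0.32
```

Note that this integrates everything between the visible surface and the plane, which is exactly the occlusion limitation discussed for single-view depth methods below the surface.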
Dehais et al. utilized the technique of pixel matching with a pair of stereo images taken from above food to extrapolate vertical depth data, with the aid of a chequered fiducial marker [89]. Matching the positional variations of geometric landmarks between the two images allows a voxel depth map to be constructed, from which food volume can be determined (Figure 4). This method was relatively accurate, with a mean absolute percentage error (MAPE) of 7% to 10% on tested foods [89].
Makhsous et al. proposed the use of a mobile phone outfitted with a structured light attachment for food volume analysis [104]. This structured light system projected a series of dots onto food, and based on the distortions caused by the structure of the food, a 3D depth map could be plotted. Though the method had a low overall percentage error of 11%, it was not ergonomic: users were required to carry around an additional attachment for their smartphones, as well as to take a full 360-degree video of any food that they wished to assess [104].
Ando et al.’s method utilized the multiple cameras on the iPhone X to produce depth images [111]. With the RGB-Depth (RGB-D) images captured by the device, they were able to collect information on the 3-dimensional structures of food, allowing for more accurate volume assessment than other forms of 2D image analysis. The relative error ranged between 1% and 7% for the three types of food items tested [111]. However, the method only takes into account the top surface of the food, as any food area beneath the visible surface is occluded from view; the system therefore includes all volume between the visible surface and the reference plane in its calculations [111]. This may introduce noticeable inaccuracy where food is positioned on sloped surfaces such as in a bowl, where items overlap each other, or with irregularly shaped foods such as chicken wings.
Evidently, reducing the number of images required to be taken by users would be ergonomically ideal, but this can lead to inaccuracies in quantifying volume due to occlusions that are not visible in a single-view image [108,109,110,120]. To circumvent this issue, researchers have extended the use of depth sensing technology to integrate the aspect of deep learning, allowing 3D depth maps of food objects to be predicted based on visible surfaces [108,109,110,120]. This model works on the assumption that sufficient images and training will allow the system to understand the context of the scene, allowing for camera viewing angles, depth value of food objects, and occluded regions to be extrapolated from the images [108,109,110,120]. Unlike previous applications of deep learning in food volume estimation that relied on relative estimations, this provides an avenue for more absolute volume calculations to be performed.
Myers et al. trained a CNN-based system with multiple RGB-D images taken of both real food and plastic food models to recognize and predict depth maps [110]. When 2D RGB photos of fully plated meals were tested with the system, it was able to generate a voxel depth map based on the visible food surfaces. Though the researchers were not able to achieve optimal accuracy with some test foods, some of which encountered absolute volume errors of up to 600 mL, the technology showed promising results and could be a suitable way to extrapolate data from occluded sections of food images.
Christ et al. developed a system that worked on similar principles, but instead of deriving absolute food volume, the system was designed to assist diabetics with estimating servings of carbohydrates, referred to in the paper as bread units (BU) [120]. This was done by training the system with a set of RGB-D images that had been annotated with the corresponding number of contained BUs by human experts. Depth maps were able to be accurately predicted from 2D RGB images fed into the system, achieving an average error of only 5% [120].
More recently, Lo et al. trained a separate system with an extensive 3D food model database that consisted of 20,000 pairs of depth images per type of food [108]. These images were taken from opposing views and allowed the system to be familiar with occluded angles. When new single-view images were supplied to the system, the system was able to synthesize the occluded side in 3D space based on the training information, therefore allowing the full 3D mesh and volume to be determined (Figure 5) [109]. In two separate studies, researchers were able to determine food volume with promising results, showing an average accuracy of 98% and 92% respectively [108,109]. However, in free-living conditions, foods of a similar category may vary in shape, and can be difficult to predict unless a 3D-model food database of sufficient size and complexity is developed.
Despite the advances in deep learning and computer vision technologies, the practicality of these approaches must also be considered. To effectively apply deep learning, the methods discussed above require considerably larger datasets, ranging from 100 to 20,000 images per individual food type [108,109,110,120]. As such, the development of datasets becomes increasingly tedious as the number of food types and cuisines considered grows. A recent review by Zhou explored a few key limitations of the current applications of deep learning in food [115]. For the technology to be applicable for general public use, thousands of foods will need to be photographed, annotated and consolidated into a central database. Tapping into openly available resources such as the internet, social media and volunteers for data collection may be an appealing solution, but researchers run the inevitable risk of collecting inconsistent and inaccurately labelled data, which may further complicate the process. The use of extremely large databases will also require significant storage space and processing power, potentially requiring cloud resources and remote processing, and restricting use in an offline setting. These barriers will need to be overcome if deep learning is to be used as part of the volume estimation process.

5.3. Database Dependency

As with current dietary assessment practices, the accuracy of the database used is essential in ensuring the precision of the final derived result [20,32]. This could be problematic in the Southeast Asian region. Given the vast array of different local foods and delicacies, different countries have their own independently managed nutrient databases, and there is limited comparability between countries [121,122,123,124]. There are variations in the methods used to evaluate nutrients, and some databases may not include the full range of nutrients [122,124]. Variations in the agricultural origin, processing steps, and recipes of available foods can also contribute to significant disparities in nutrient values [121,124].
Density is an important factor in image-based assessment that needs to be considered in the conversion of food volume to calories [125,126]. Most large-scale nutrition databases do not include density in their entries, instead relying on mass as the comparative unit of measurement [125,126]. The common way of converting weight to volume involves the use of standard references such as typical sizes of fruit, or household measures such as cups or tablespoons. However, research has shown this method to be largely inaccurate, with significant differences found in 80% of tested foods [125,127]. At present, the most extensive food density dataset is maintained by the FAO, but many entries are single-ingredient foods as opposed to the composite meals that would be common in real-world scenarios [128]. Furthermore, there is limited representation of Asian foods, which makes the translation of such information to Southeast Asian databases difficult.
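The conversion chain described above is simple to state: estimated volume times density gives mass, and mass times the database energy value gives calories. The sketch below makes the dependency on the density figure explicit; the density and energy values used are illustrative placeholders, not database entries.

```python
# Hypothetical sketch of the volume-to-calorie conversion step. The
# density (g/cm^3) and energy (kcal per 100 g) figures are illustrative.

def calories_from_volume(volume_cm3: float, density_g_per_cm3: float,
                         kcal_per_100g: float) -> float:
    """Convert an estimated food volume into calories via mass."""
    mass_g = volume_cm3 * density_g_per_cm3
    return mass_g * kcal_per_100g / 100.0

# 200 cm^3 of cooked rice, assuming ~0.8 g/cm^3 and ~130 kcal per 100 g:
print(round(calories_from_volume(200, 0.8, 130), 1))  # 208.0
```

Any error in the assumed density propagates linearly into the calorie estimate, which is why the absence of density data in nutrient databases is such a bottleneck for image-based methods.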
Even if density factors are fully established, there may be limitations with nutrient determination due to the differences between absolute and bulk density [126]. Xu et al. commented on the difficulty of measuring food items such as salad leaves given that their calculable volume from a single-angle image will not take into account the air pockets and spaces between the leaves [129]. Breads that are baked with longer proofing times can also be less dense than their less-leavened counterparts; sauces and stews that are reduced or have syrups added may be of a much thicker consistency, and therefore denser than water or oil of the same volume. These discrepancies in volume measurements can lead to significant inaccuracies when used in deriving nutrient calculations.
For digital food volume measurement techniques to be successful, current databases will need to be evaluated and improved upon to ensure that accurate and appropriate information can be calculated [20].

6. Application to the Southeast Asian Consumer

Given the increasing adoption of internet and smartphone use in the Southeast Asian region, these personal devices may be a favorable way to interface with users. To make the technology as ergonomic as possible and improve compliance rates, the number of steps and burdens placed on respondents should be minimized. Applications that require users to carry around additional accessories or fiducial markers as part of the data collection process may be less favorable to users. Applications should also limit the number of required user inputs so as to make the process less frustrating. Ideally, the tool should derive data from a single food photo, allowing users to photograph their food from any angle without additional restrictive criteria.
Though the use of personal smartphone devices could be a convenient means of delivery for some, the low SES population groups that are most vulnerable to the impact of poor nutrition may not be able to afford the most advanced technologies. If food volume estimation is to be integrated into current digital applications for effective public health use, developers will need to be wary that their requirements do not outstrip the technological capacity of the average consumer device. Careful selection of the appropriate volume estimation technologies is also crucial as auxiliary systems such as image and nutrient databases will have to be developed for applications to be successful, and the use of advanced methods may limit the usability of the system.
The act of capturing a photo of all consumed meals may be a challenge for Southeast Asian consumers. Communal meals or banquets may make the act of recording dietary intake difficult [44]. Foods are generally picked out from the sharing platter just before eating, and piling a variety of food on one’s own plate is considered rude in many social settings. At some Chinese banquets, dishes are brought out sequentially, requiring individuals to take multiple photos throughout the course of the meal.
Certain types of foods may be difficult to differentiate based solely on a single image. Foods such as curries or soups may be opaque and, when served in a bowl, can obscure many details within the dish. Sauces that are poured over rice and breads may be soaked up and become difficult to quantify. Furthermore, soups, sauces and curries can have a large variance in nutritional value depending on the ingredients used, even when they appear visually similar. In a similar vein, beverages such as sugar-free cola drinks will be indistinguishable from their regular varieties in an image. Many of the breads or wraps common in Southeast Asia can also have a range of different fillings which will significantly affect the nutritional content of the food. To improve accuracy and reduce the likelihood of incorrect food identification, these foods may require additional input from users. Apps could provide lists of suggested prompts for users to select from, identifying details such as the type of filling contained in a sandwich, whether a sauce or curry contains coconut milk, or whether a beverage presented in an image is sugar-free.
Dishes like curries and braised foods can have their sauce and solid items served in different proportions. In nutrient databases, solid components and the accompanying sauces are typically categorized together as a composite dish. However, separate nutrient values are not provided for sauces and the solid ingredients. This will lead to inaccuracies when converting calculated food volume into calories. Future technologies may need to explore solutions to identify and separate liquid and solid components of a single food item.
Measuring food intake from food photographs can also be deceptive. Some components of meals, such as soups and gravies, are often left unfinished. Inedible parts of the meal such as animal bones, nut and clam shells, or certain herbs and spices may be present in the photograph but are discarded. This could contribute to inaccurate approximations of consumed volume. In the context of shared meals, taking pictures of sharing platters could also portray an inaccurate estimation of the amount consumed by the individual. Individuals would need to be mindful to capture only the food that they themselves are consuming.
Given the large degree of dietary variation within Southeast Asia, there is a vast array of different dishes and ingredients that need to be collated, and databases will need to be very extensive to capture this information accurately. Language barriers may also make it difficult to apply a single tool across the region. Applications and databases will therefore need to be localized to work effectively with different populations.

7. Conclusions and Recommendations

This paper has explored the cultural hurdles to dietary assessment in the Southeast Asian region, the limitations of the current practices and approaches, and has reviewed novel ideas and concepts that aim to improve on these limitations. New computer vision technologies, especially smartphone-based tools incorporating machine learning and depth sensing, have considerable potential for nutritional precision on an individual level, as well as scalability to the larger population. Smartphone penetration in the Southeast Asian region has increased rapidly in recent years and the region may prove to be a fertile test bed for the development and integration of such applications. It is an advantageous time to capitalize on these advancements, and the development of a robust automated digital dietary assessment tool will likely enable governments, healthcare institutions, researchers and dietitians to be able to gather effective nutritional insight and combat the rising rates of obesity and diabetes in the region.
That being said, we recognize certain limitations in this review. Firstly, this paper does not fully explore the logistical and regulatory requirements of such a task, and more research will be required to determine the practical feasibility of applying such a technology in the field. The creation of a homogenized nutritional dataset with the inclusion of density factors to support such an endeavor is likely to be a challenging and expensive task, and will require much consideration. Secondly, given the rising popularity of food photography and widespread use of smartphones, this review has also focused primarily on image-based methods of digital dietary assessment and does not consider other forms of digital methods such as bar code scanners or text-based recording in the discussion. Lastly, though this review attempted to provide an extensive discussion about the complexity of the Southeast Asian diet, it is by no means comprehensive and we recognize that there are many ethnic and regional intricacies that were not fully articulated. As such, we recommend that future researchers intending to implement the technology carefully consider eating behaviors and practices unique to the localized environment in the design of their application.
The central axis of nutrition is the precise estimation of an individual’s food intake. Food intake measurements are critical for both prescriptive and diagnostic approaches in the provision of an optimal diet. Despite many decades of work in this area, there still remain considerable limitations. With the advent of the digital revolution and the use of machine learning and artificial intelligence, we now have the potential to greatly improve our ability to estimate food intake on both an individual and a population level. Indeed, the time has come to recognize the limitations of the conventional methods of estimating nutrient intake and embrace the current advances in computing, technology and machine learning to resolve one of the most important questions in human nutrition.

Supplementary Materials

The following are available online at https://www.mdpi.com/2072-6643/12/4/1167/s1, Table S1: Table of image-based food volume estimation methods.

Author Contributions

Conceptualization, W.T. and C.J.H.; writing—original draft preparation, W.T.; writing—review and editing, W.T., B.K., R.Q., J.L. and C.J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by A*STAR under its IAF-PP Food Structure Engineering for Nutrition and Health Programme (Grant ID No: H17/01/a0/A11 & H18/01/a0/B11).

Acknowledgments

The authors thank A*STAR for supporting this review.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. World Health Organization. Global Report on Diabetes: Executive Summary; World Health Organization: Geneva, Switzerland, 2016.
  2. FAO. Asia and the Pacific Regional Overview of Food Security and Nutrition 2018—Accelerating Progress through the SDGs; FAO: Bangkok, Thailand, 2018.
  3. Hwang, C.K.; Han, P.V.; Zabetian, A.; Ali, M.K.; Narayan, K.V. Rural diabetes prevalence quintuples over twenty-five years in low- and middle-income countries: A systematic review and meta-analysis. Diabetes Res. Clin. Pract. 2012, 96, 271–285.
  4. Lim, R.B.T.; Chen, C.; Naidoo, N.; Gay, G.; Tang, W.E.; Seah, D.; Chen, R.; Tan, N.C.; Lee, J.; Tai, E.S.; et al. Anthropometrics indices of obesity, and all-cause and cardiovascular disease-related mortality, in an Asian cohort with type 2 diabetes mellitus. Diabetes Metab. 2015, 41, 291–300.
  5. Priyadi, A.; Muhtadi, A.; Suwantika, A.; Sumiwi, S. An economic evaluation of diabetes mellitus management in South East Asia. J. Adv. Pharm. Educ. Res. 2019, 9, 53–74.
  6. Roglic, G.; Varghese, C.; Thamarangsi, T. Diabetes in South-East Asia: Burden, gaps, challenges and ways forward. WHO South-East Asia J. Public Health 2016, 5, 1–4.
  7. Helble, M.; Francisco, K. The Upcoming Obesity Crisis in Asia and the Pacific: First Cost Estimates; ADBI Working Paper 743; Asian Development Bank Institute: Tokyo, Japan, 2017.
  8. Vorster, H.H.; Venter, C.S.; Wissing, M.P.; Margetts, B.M. The nutrition and health transition in the North West Province of South Africa: A review of the THUSA (Transition and Health during Urbanisation of South Africans) study. Public Health Nutr. 2005, 8, 480–490.
  9. Steyn, N.P.; Mann, J.; Bennett, P.H.; Temple, N.; Zimmet, P.; Tuomilehto, J.; Lindström, J.; Louheranta, A. Diet, nutrition and the prevention of type 2 diabetes. Public Health Nutr. 2004, 7, 147–165.
  10. Choi, Y.J.; Cho, Y.M.; Park, C.K.; Jang, H.C.; Park, K.S.; Kim, S.Y.; Lee, H.K. Rapidly increasing diabetes-related mortality with socio-environmental changes in South Korea during the last two decades. Diabetes Res. Clin. Pract. 2006, 74, 295–300.
  11. Sobngwi, E.; Mbanya, J.-C.; Unwin, N.C.; Porcher, R.; Kengne, A.-P.; Fezeu, L.; Minkoulou, E.M.; Tournoux, C.; Gautier, J.-F.; Aspray, T.J.; et al. Exposure over the life course to an urban environment and its relation with obesity, diabetes, and hypertension in rural and urban Cameroon. Int. J. Epidemiol. 2004, 33, 769–776.
  12. Nanditha, A.; Ma, R.C.W.; Ramachandran, A.; Snehalatha, C.; Chan, J.C.N.; Chia, K.S.; Shaw, J.E.; Zimmet, P.Z. Diabetes in Asia and the Pacific: Implications for the Global Epidemic. Diabetes Care 2016, 39, 472–485.
  13. Eckert, S.; Kohler, S. Urbanization and health in developing countries: A systematic review. World Health Popul. 2014, 15, 7–20.
  14. Tull, K. Urban Food Systems and Nutrition; Institute of Developmental Studies: Brighton, UK, 2018.
  15. Angkurawaranon, C.; Jiraporncharoen, W.; Chenthanakij, B.; Doyle, P.; Nitsch, D. Urban environments and obesity in southeast Asia: A systematic review, meta-analysis and meta-regression. PLoS ONE 2014, 9, e113547.
  16. Allender, S.; Foster, C.; Hutchinson, L.; Arambepola, C. Quantification of urbanization in relation to chronic diseases in developing countries: A systematic review. J. Urban Health 2008, 85, 938–951.
  17. ASEAN/UNICEF/WHO. Regional Report on Nutrition Security in ASEAN, Volume 2; UNICEF: Bangkok, Thailand, 2016.
  18. Wong, L.Y.; Toh, M.P.H.S.; Tham, L.W.C. Projection of prediabetes and diabetes population size in Singapore using a dynamic Markov model. J. Diabetes 2017, 9, 65–75.
  19. Sevenhuysen, G.P. Food composition databases: Current problems and solutions. In Food Nutrition & Agriculture; Lupien, J.R., Richmond, K.R., Papetti, M.A., Cotier, J.P., Ghazali, A., Dawson, R., Eds.; FAO: Bangkok, Thailand, 1994.
  20. Kapsokefalou, M.; Roe, M.; Turrini, A.; Costa, H.; Martinez de Victoria, E.; Marletta, L.; Berry, R.; Finglas, P. Food Composition at Present: New Challenges. Nutrients 2019, 11, 1714.
  21. Chapparo, C.O.L.; Sethuraman, K. Overview of the Nutrition Situation in Seven Countries in Southeast Asia; Food and Nutrition Technical Assistance III Project (FANTA): Washington, DC, USA, 2014.
  22. Kasim, N.B.M.; Ahmad, M.H.; Shaharudin, A.B.; Naidu, B.M.; Ying, C.Y.; Tahir, H.; Aris, B. Food choices among Malaysian adults: Findings from Malaysian Adults Nutrition Survey (MANS) 2003 and MANS 2014. Malays. J. Nutr. 2018, 24, 63–75.
  23. Van Esterik, P. Food Culture in Southeast Asia; Greenwood Publishing Group: Westport, CT, USA, 2008.
  24. Shim, J.S.; Oh, K.; Kim, H.C. Dietary assessment methods in epidemiologic studies. Epidemiol. Health 2014, 36, e2014009.
  25. Kirkpatrick, S.I.; Collins, C.E. Assessment of Nutrient Intakes: Introduction to the Special Issue. Nutrients 2016, 8, 184.
  26. Subar, A.F.; Freedman, L.S.; Tooze, J.A.; Kirkpatrick, S.I.; Boushey, C.; Neuhouser, M.L.; Thompson, F.E.; Potischman, N.; Guenther, P.M.; Tarasuk, V.; et al. Addressing Current Criticism Regarding the Value of Self-Report Dietary Data. J. Nutr. 2015, 145, 2639–2645.
  27. Thompson, F.E.; Subar, A.F.; Loria, C.M.; Reedy, J.L.; Baranowski, T. Need for technological innovation in dietary assessment. J. Am. Diet. Assoc. 2010, 110, 48–51. [Google Scholar] [CrossRef] [Green Version]
  28. Naska, A.; Lagiou, A.; Lagiou, P. Dietary assessment methods in epidemiological research: Current state of the art and future prospects. F1000Res. 2017, 6, 926. [Google Scholar] [CrossRef] [Green Version]
  29. Hébert, J.R.; Hurley, T.G.; Steck, S.E.; Miller, D.R.; Tabung, F.K.; Peterson, K.E.; Kushi, L.H.; Frongillo, E.A. Considering the Value of Dietary Assessment Data in Informing Nutrition-Related Health Policy. Adv. Nutr. 2014, 5, 447–455. [Google Scholar] [CrossRef] [PubMed]
  30. Burrows, T.L.; Ho, Y.Y.; Rollo, M.E.; Collins, C.E. Validity of Dietary Assessment Methods When Compared to the Method of Doubly Labeled Water: A Systematic Review in Adults. Front. Endocrinol. 2019, 10. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Cade, J.E. Measuring diet in the 21st century: Use of new technologies. Proc. Nutr. Soc. 2017, 76, 276–282. [Google Scholar] [CrossRef] [PubMed]
  32. Cordeiro, F.; Epstein, D.A.; Thomaz, E.; Bales, E.; Jagannathan, A.K.; Abowd, G.D.; Fogarty, J. Barriers and Negative Nudges: Exploring Challenges in Food Journaling. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 1159–1162. [Google Scholar]
  33. Thompson, F.E.; Subar, A.F. Chapter 1—Dietary Assessment Methodology. In Nutrition in the Prevention and Treatment of Disease, 4th ed.; Coulston, A.M., Boushey, C.J., Ferruzzi, M.G., Delahanty, L.M., Eds.; Academic Press: Cambridge, MA, USA, 2017; pp. 5–48. [Google Scholar]
  34. OECD. Southeast Asia Going Digital: Connecting SMEs; OECD: Paris, France, 2019. [Google Scholar]
  35. Boushey, C.J.; Spoden, M.; Zhu, F.M.; Delp, E.J.; Kerr, D.A. New mobile methods for dietary assessment: Review of image-assisted and image-based dietary assessment methods. Proc. Nutr. Soc. 2017, 76, 283–294. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Arnhold, M.; Quade, M.; Kirch, W. Mobile applications for diabetics: A systematic review and expert-based usability evaluation considering the special requirements of diabetes patients age 50 years or older. J. Med. Internet Res. 2014, 16, e104. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  37. West, J.H.; Belvedere, L.M.; Andreasen, R.; Frandsen, C.; Hall, P.C.; Crookston, B.T. Controlling Your "App"etite: How Diet and Nutrition-Related Mobile Apps Lead to Behavior Change. JMIR Mhealth Uhealth 2017, 5, e95. [Google Scholar] [CrossRef] [PubMed]
  38. Pendergast, F.J.; Ridgers, N.D.; Worsley, A.; McNaughton, S.A. Evaluation of a smartphone food diary application using objectively measured energy expenditure. Int. J. Behav. Nutr. Phys. Act. 2017, 14, 30. [Google Scholar] [CrossRef] [Green Version]
  39. Tanchoco, C.C. Food-based dietary guidelines for Filipinos: Retrospects and prospects. Asia Pac. J. Clin. Nutr. 2011, 20, 462–471. [Google Scholar]
  40. Paik, H.Y. The issues in assessment and evaluation of diet in Asia. Asia Pac. J. Clin. Nutr. 2008, 17 (Suppl. 1), 294–295. [Google Scholar]
  41. Chong, K.H.; Wu, S.K.; Noor Hafizah, Y.; Bragt, M.C.; Poh, B.K. Eating Habits of Malaysian Children: Findings of the South East Asian Nutrition Surveys (SEANUTS). Asia-Pac. J. Public Health 2016, 28, 59s–73s. [Google Scholar] [CrossRef]
  42. Health Promotion Board. National Nutrition Survey 2010; Health Promotion Board: Singapore, 2010.
  43. Whitton, C.; Ma, Y.; Bastian, A.C.; Fen Chan, M.; Chew, L. Fast-food consumers in Singapore: Demographic profile, diet quality and weight status. Public Health Nutr. 2014, 17, 1805–1813. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  44. Song, S.; Song, W.O. National nutrition surveys in Asian countries: Surveillance and monitoring efforts to improve global health. Asia Pac. J. Clin. Nutr. 2014, 23, 514–523. [Google Scholar] [CrossRef]
  45. Fellows, P.; Hilmi, M. Selling street and snack foods. In FAO Diversification Booklet; FAO: Rome, Italy, 2011. [Google Scholar]
  46. Quek, R.Y.C.; J., G.H.; Henry, C.J. Energy density of ethnic cuisines in Singaporean hawker centres: A comparative study of Chinese, Malay, and Indian foods. Malays. J. Nutr. 2019, 25, 175–188. [Google Scholar]
  47. Johnson, R.K. Dietary Intake—How Do We Measure What People Are Really Eating? Obes. Res. 2002, 10, 63S–68S. [Google Scholar] [CrossRef] [PubMed]
  48. Scagliusi, F.B.; Ferriolli, E.; Pfrimer, K.; Laureano, C.; Cunha, C.S.; Gualano, B.; Lourenco, B.H.; Lancha, A.H., Jr. Underreporting of energy intake in Brazilian women varies according to dietary assessment: A cross-sectional study using doubly labeled water. J. Am. Diet. Assoc. 2008, 108, 2031–2040. [Google Scholar] [CrossRef] [PubMed]
  49. Rebro, S.M.; Patterson, R.E.; Kristal, A.R.; Cheney, C.L. The effect of keeping food records on eating patterns. J. Am. Diet. Assoc. 1998, 98, 1163–1165. [Google Scholar] [CrossRef]
  50. Archundia Herrera, M.C.; Chan, C.B. Narrative Review of New Methods for Assessing Food and Energy Intake. Nutrients 2018, 10, 1064. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Eldridge, A.L.; Piernas, C.; Illner, A.-K.; Gibney, M.J.; Gurinović, M.A.; De Vries, J.H.M.; Cade, J.E. Evaluation of New Technology-Based Tools for Dietary Intake Assessment—An ILSI Europe Dietary Intake and Exposure Task Force Evaluation. Nutrients 2018, 11, 55. [Google Scholar] [CrossRef] [Green Version]
  52. Amoutzopoulos, B.; Steer, T.; Roberts, C.; Cade, J.E.; Boushey, C.J.; Collins, C.E.; Trolle, E.; Boer, E.J.D.; Ziauddeen, N.; Van Rossum, C.; et al. Traditional methods v. new technologies—Dilemmas for dietary assessment in large-scale nutrition surveys and studies: A report following an international panel discussion at the 9th International Conference on Diet and Activity Methods (ICDAM9), Brisbane, 3 September 2015. J. Nutr. Sci. 2018, 7, e11. [Google Scholar] [CrossRef] [Green Version]
  53. Carter, M.C.; Burley, V.J.; Nykjaer, C.; Cade, J.E. ‘My Meal Mate’ (MMM): Validation of the diet measures captured on a smartphone application to facilitate weight loss. Br. J. Nutr. 2013, 109, 539–546. [Google Scholar] [CrossRef] [Green Version]
  54. Guan, V.X.; Probst, Y.C.; Neale, E.P.; Tapsell, L.C. Evaluation of the dietary intake data coding process in a clinical setting: Implications for research practice. PLoS ONE 2019, 14, e0221047. [Google Scholar] [CrossRef] [PubMed]
  55. Conway, R.; Robertson, C.; Dennis, B.; Stamler, J.; Elliott, P. Standardised coding of diet records: Experiences from INTERMAP UK. Br. J. Nutr. 2004, 91, 765–771. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  56. Labrique, A.; Mehra, S.; M, M. The Use of New and Existing Tools and Technologies to Support the Global Nutrition Agenda: The innovation opportunity. In Good Nutrition: Perspectives for the 21st Century; Eggersdorfen, M., Kraemer, K., Cordaro, J., Fanzo, J., Gibney, M., Kennedy, E., Labrique, A., Steffen, J., Eds.; Karger: Basel, Switzerland, 2016; pp. 209–219. [Google Scholar]
  57. Chui, T. Validation Study of a Passive Image-Assisted Dietary Assessment Method with Automated Image Analysis Process; University of Tennessee: Knoxville, TN, USA, 2018. [Google Scholar]
  58. Dennison, L.; Morrison, L.; Conway, G.; Yardley, L. Opportunities and challenges for smartphone applications in supporting health behavior change: Qualitative study. J. Med. Internet Res. 2013, 15, e86. [Google Scholar] [CrossRef] [PubMed]
  59. Adams, S.H.; Anthony, J.C.; Carvajal, R.; Chae, L.; Khoo, C.S.H.; Latulippe, M.E.; Matusheski, N.V.; McClung, H.L.; Rozga, M.; Schmid, C.H.; et al. Perspective: Guiding Principles for the Implementation of Personalized Nutrition Approaches That Benefit Health and Function. Adv. Nutr. 2019. [Google Scholar] [CrossRef] [PubMed]
  60. Fu, H.N.; Adam, T.J.; Konstan, J.A.; Wolfson, J.A.; Clancy, T.R.; Wyman, J.F. Influence of Patient Characteristics and Psychological Needs on Diabetes Mobile App Usability in Adults With Type 1 or Type 2 Diabetes: Crossover Randomized Trial. JMIR Diabetes 2019, 4, e11462. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  61. Liew, M.S.; Zhang, J.; See, J.; Ong, Y.L. Usability Challenges for Health and Wellness Mobile Apps: Mixed-Methods Study Among mHealth Experts and Consumers. JMIR Mhealth Uhealth 2019, 7, e12160. [Google Scholar] [CrossRef]
  62. Koot, D.; Goh, P.S.C.; Lim, R.S.M.; Tian, Y.; Yau, T.Y.; Tan, N.C.; Finkelstein, E.A. A Mobile Lifestyle Management Program (GlycoLeap) for People with Type 2 Diabetes: Single-Arm Feasibility Study. JMIR Mhealth Uhealth 2019, 7, e12965. [Google Scholar] [CrossRef] [Green Version]
  63. Thompson-Felty, C.; Johnston, C.S. Adherence to Diet Applications Using a Smartphone Was Associated With Weight Loss in Healthy Overweight Adults Irrespective of the Application. J. Diabetes Sci. Technol. 2017, 11, 184–185. [Google Scholar] [CrossRef] [Green Version]
  64. Martin, C.K.; Correa, J.B.; Han, H.; Allen, H.R.; Rood, J.C.; Champagne, C.M.; Gunturk, B.K.; Bray, G.A. Validity of the Remote Food Photography Method (RFPM) for estimating energy and nutrient intake in near real-time. Obesity 2012, 20, 891–899. [Google Scholar] [CrossRef] [Green Version]
  65. Ming, Z.-Y.; Chen, J.; Cao, Y.; Forde, C.; Ngo, C.-W.; Chua, T. Food Photo Recognition for Dietary Tracking: System and Experiment; Springer: Berlin/Heidelberg, Germany, 2018; pp. 129–141. [Google Scholar]
  66. Gibney, M.; Walsh, M.; Goosens, J. Personalized Nutrition: Paving the way to better population health. In Good Nutrition: Perspectives for the 21st Century; Eggersdorfen, M., Kraemer, K., Cordaro, J., Fanzo, J., Gibney, M., Kennedy, E., Labrique, A., Steffen, J., Eds.; Karger: Basel, Switzerland, 2016; pp. 235–248. [Google Scholar]
  67. Samoggia, A.; Riedel, B. Assessment of nutrition-focused mobile apps’ influence on consumers’ healthy food behaviour and nutrition knowledge. Food Res. Int. 2020, 128, 108766. [Google Scholar] [CrossRef]
  68. Issom, D.Z.; Woldaregay, A.Z.; Chomutare, T.; Bradway, M.; Årsand, E.; Hartvigsen, G. Mobile applications for people with diabetes published between 2010 and 2015. Diabetes Manag. 2015, 5, 539–550. [Google Scholar] [CrossRef]
  69. De Cock, N.; Vangeel, J.; Lachat, C.; Beullens, K.; Vervoort, L.; Goossens, L.; Maes, L.; Deforche, B.; De Henauw, S.; Braet, C.; et al. Use of Fitness and Nutrition Apps: Associations With Body Mass Index, Snacking, and Drinking Habits in Adolescents. JMIR Mhealth Uhealth 2017, 5, e58. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  70. Teixeira, V.; Voci, S.M.; Mendes-Netto, R.S.; Da Silva, D.G. The relative validity of a food record using the smartphone application MyFitnessPal. Nutr. Diet. J. Dietit. Assoc. Aust. 2018, 75, 219–225. [Google Scholar] [CrossRef] [PubMed]
  71. Braz, V.N.; Lopes, M. Evaluation of mobile applications related to nutrition. Public Health Nutr. 2019, 22, 1209–1214. [Google Scholar] [CrossRef] [Green Version]
  72. Hoj, T.H.; Covey, E.L.; Jones, A.C.; Haines, A.C.; Hall, P.C.; Crookston, B.T.; West, J.H. How Do Apps Work? An Analysis of Physical Activity App Users’ Perceptions of Behavior Change Mechanisms. JMIR Mhealth Uhealth 2017, 5, e114. [Google Scholar] [CrossRef]
  73. European Commission. Green Paper on Mobile Health (“mHealth”); European Commission: Brussels, Belgium, 2014. [Google Scholar]
  74. Ryan, E.A.; Holland, J.; Stroulia, E.; Bazelli, B.; Babwik, S.A.; Li, H.; Senior, P.; Greiner, R. Improved A1C Levels in Type 1 Diabetes with Smartphone App Use. Can. J. Diabetes 2017, 41, 33–40. [Google Scholar] [CrossRef] [Green Version]
  75. Woo, I.; Otsmo, K.; Kim, S.; Ebert, D.S.; Delp, E.J.; Boushey, C.J. Automatic portion estimation and visual refinement in mobile dietary assessment. Comput. Imaging VIII 2010, 7533, 75330O. [Google Scholar]
  76. MacNeill, V.; Foley, M.; Quirk, A.; McCambridge, J. Shedding light on research participation effects in behaviour change trials: A qualitative study examining research participant experiences. BMC Public Health 2016, 16, 91. [Google Scholar] [CrossRef] [Green Version]
  77. FDA. Policy for Device Software Functions and Mobile Medical Applications: Guidance for Industry and Food and Drug Administration Staff; US Food and Drug Administration: White Oak, MD, USA, 2019.
  78. Illner, A.K.; Freisling, H.; Boeing, H.; Huybrechts, I.; Crispim, S.P.; Slimani, N. Review and evaluation of innovative technologies for measuring diet in nutritional epidemiology. Int. J. Epidemiol. 2012, 41, 1187–1203. [Google Scholar] [CrossRef] [Green Version]
  79. Forster, H.; Walsh, M.C.; Gibney, M.J.; Brennan, L.; Gibney, E.R. Personalised nutrition: The role of new dietary assessment methods. Proc. Nutr. Soc. 2016, 75, 96–105. [Google Scholar] [CrossRef] [Green Version]
  80. Burrows, T.L.; Rollo, M.E. Advancement in Dietary Assessment and Self-Monitoring Using Technology. Nutrients 2019, 11, 1648. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  81. Anthimopoulos, M.; Dehais, J.; Mougiakakou, S. Performance Evaluation Methods of Computer Vision Systems for Meal Assessment. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Amsterdam, The Netherlands, 16 October 2016; pp. 83–87. [Google Scholar]
  82. Zhu, F.; Bosch, M.; Khanna, N.; Boushey, C.J.; Delp, E.J. Multiple Hypotheses Image Segmentation and Classification With Application to Dietary Assessment. IEEE J. Biomed. Health Inform. 2015, 19, 377–388. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  83. Wang, Y.; Chen, J.J.; Ngo, C.W.; Chua, T.S.; Zuo, W.; Ming, Z. Mixed Dish Recognition through Multi-Label Learning. In CEA ’19: Proceedings of the 11th Workshop on Multimedia for Cooking and Eating Activities; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar]
  84. Wang, Y.; He, Y.; Boushey, C.J.; Zhu, F.; Delp, E.J. Context Based Image Analysis with Application in Dietary Assessment and Evaluation. Multimed. Tools Appl. 2018, 77, 19769–19794. [Google Scholar] [CrossRef] [PubMed]
  85. Subhi, M.A.; Ali, S.H.; Mohammed, M.A. Vision-Based Approaches for Automatic Food Recognition and Dietary Assessment: A Survey. IEEE Access 2019, 7, 35370–35381. [Google Scholar] [CrossRef]
  86. Yue, Y.; Jia, W.; Sun, M. Measurement of food volume based on single 2-D image without conventional camera calibration. In Proceedings of the 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Diego, CA, USA, 28 August–1 September 2012; pp. 2166–2169. [Google Scholar]
  87. Jia, W.; Yue, Y.; Fernstrom, J.D.; Yao, N.; Sclabassi, R.J.; Fernstrom, M.H.; Sun, M. Imaged based estimation of food volume using circular referents in dietary assessment. J. Food Eng. 2012, 109, 76–86. [Google Scholar] [CrossRef] [Green Version]
  88. Fang, S.; Shao, Z.; Kerr, D.A.; Boushey, C.J.; Zhu, F. An End-to-End Image-Based Automatic Food Energy Estimation Technique Based on Learned Energy Distribution Images: Protocol and Methodology. Nutrients 2019, 11. [Google Scholar] [CrossRef] [Green Version]
  89. Dehais, J.; Anthimopoulos, M.; Shevchik, S.; Mougiakakou, S. Two-View 3D Reconstruction for Food Volume Estimation. IEEE Trans. Multimed. 2017, 19, 1090–1099. [Google Scholar] [CrossRef] [Green Version]
  90. Chae, J.; Woo, I.; Kim, S.; Maciejewski, R.; Zhu, F.; Delp, E.J.; Boushey, C.J.; Ebert, D.S. Volume Estimation Using Food Specific Shape Templates in Mobile Image-Based Dietary Assessment. Proc. SPIE-Int. Soc. Opt. Eng. 2011, 7873, 78730K. [Google Scholar] [CrossRef] [Green Version]
  91. Xu, C.; He, Y.; Khanna, N.; Boushey, C.J.; Delp, E.J. Model-based food volume estimation using 3D pose. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 2534–2538. [Google Scholar]
  92. He, Y.; Xu, C.; Khanna, N.; Boushey, C.J.; Delp, E.J. Food image analysis: Segmentation, identification, and weight estimation. In Proceedings of the IEEE International Conference on Multimedia and Expo, San Jose, CA, USA, 15–19 July 2013. [Google Scholar]
  93. Rahman, M.H.; Li, Q.; Pickering, M.; Frater, M.; Kerr, D.; Boushey, C.; Delp, E. Food volume estimation in a mobile phone based dietary assessment system. In Proceedings of the 2012 Eighth International Conference on Signal Image Technology and Internet Based Systems, Sorrento, Naples, Italy, 25–29 November 2012; pp. 988–995. [Google Scholar]
  94. Puri, M.; Zhiwei, Z.; Yu, Q.; Divakaran, A.; Sawhney, H. Recognition and volume estimation of food intake using a mobile device. In Proceedings of the 2009 Workshop on Applications of Computer Vision (WACV), Snowbird, UT, USA, 7–8 December 2009; pp. 1–8. [Google Scholar]
  95. Martin, C.K.; Kaya, S.; Gunturk, B.K. Quantification of food intake using food image analysis. In Proceedings of the 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Minneapolis, MN, USA, 3–6 September 2009; pp. 6869–6872. [Google Scholar]
  96. Rhyner, D.; Loher, H.; Dehais, J.; Anthimopoulos, M.; Shevchik, S.; Botwey, R.H.; Duke, D.; Stettler, C.; Diem, P.; Mougiakakou, S. Carbohydrate estimation by a mobile phone-based system versus self-estimations of individuals with type 1 diabetes mellitus: A comparative study. J. Med. Internet Res. 2016, 18, e101. [Google Scholar] [CrossRef] [Green Version]
  97. Ege, T.; Shimoda, W.; Yanai, K. A New Large-scale Food Image Segmentation Dataset and Its Application to Food Calorie Estimation Based on Grains of Rice. In Proceedings of the 5th International Workshop on Multimedia Assisted Dietary Management, Nice, France, 21–25 October 2019; pp. 82–87. [Google Scholar]
  98. Akpa, E.H.; Suwa, H.; Arakawa, Y.; Yasumoto, K. Smartphone-Based Food Weight and Calorie Estimation Method for Effective Food Journaling. SICE J. Control Meas. Syst. Integr. 2017, 10, 360–369. [Google Scholar] [CrossRef] [Green Version]
  99. Liang, Y.; Li, J. Deep Learning-Based Food Calorie Estimation Method in Dietary Assessment. arXiv 2017, arXiv:1706.04062. [Google Scholar]
  100. Villalobos, G.; Almaghrabi, R.; Pouladzadeh, P.; Shirmohammadi, S. An image processing approach for calorie intake measurement. In Proceedings of the 2012 IEEE International Symposium on Medical Measurements and Applications Proceedings, Budapest, Hungary, 18–19 May 2012; pp. 1–5. [Google Scholar]
  101. Pouladzadeh, P.; Shirmohammadi, S.; Al-Maghrabi, R. Measuring Calorie and Nutrition from Food Image. IEEE Trans. Instrum. Meas. 2014, 63, 1947–1956. [Google Scholar] [CrossRef]
  102. Subhi, M.A.; Ali, S.H.M.; Ismail, A.G.; Othman, M. Food volume estimation based on stereo image analysis. IEEE Instrum. Meas. Mag. 2018, 21, 36–43. [Google Scholar] [CrossRef]
  103. Shang, J.; Duong, M.; Pepin, E.; Zhang, X.; Sandara-Rajan, K.; Mamishev, A.; Kristal, A. A mobile structured light system for food volume estimation. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 100–101. [Google Scholar]
  104. Makhsous, S.; Mohammad, H.M.; Schenk, J.M.; Mamishev, A.V.; Kristal, A.R. A Novel Mobile Structured Light System in Food 3D Reconstruction and Volume Estimation. Sensors 2019, 19, 564. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  105. Tanno, R.; Ege, T.; Yanai, K. AR DeepCalorieCam V2: Food calorie estimation with CNN and AR-based actual size estimation. In Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology, Tokyo, Japan, 28 November–1 December 2018; p. 46. [Google Scholar]
  106. Yang, Y.; Jia, W.; Bucher, T.; Zhang, H.; Sun, M. Image-based food portion size estimation using a smartphone without a fiducial marker. Public Health Nutr. 2019, 22, 1180–1192. [Google Scholar] [CrossRef] [PubMed]
  107. Chen, M.-Y.; Yang, Y.-H.; Ho, C.-J.; Wang, S.-H.; Liu, S.-M.; Chang, E.; Yeh, C.-H.; Ouhyoung, M. Automatic Chinese Food Identification and Quantity Estimation; Association for Computing Machinery: Singapore, 2012; p. 29. [Google Scholar]
  108. Lo, F.P.; Sun, Y.; Qiu, J.; Lo, B. Food Volume Estimation Based on Deep Learning View Synthesis from a Single Depth Map. Nutrients 2018, 10. [Google Scholar] [CrossRef] [Green Version]
  109. Lo, P.W.; Sun, Y.; Qiu, J.; Lo, B. Point2Volume: A Vision-based Dietary Assessment Approach using View Synthesis. IEEE Trans. Ind. Inform. 2019. [Google Scholar] [CrossRef]
  110. Meyers, A.; Johnston, N.; Rathod, V.; Korattikara, A.; Gorban, A.; Silberman, N.; Guadarrama, S.; Papandreou, G.; Huang, J.; Murphy, K.P. Im2Calories: Towards an automated mobile vision food diary. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1233–1241. [Google Scholar]
  111. Ando, Y.; Ege, T.; Cho, J.; Yanai, K. DepthCalorieCam: A mobile application for volume-based food calorie estimation using depth cameras. In MADiMa ’19: Proceedings of the 5th International Workshop on Multimedia Assisted Dietary Management; Association for Computing Machinery: New York, NY, USA, 2019. [Google Scholar]
  112. Zhang, W.; Yu, Q.; Siddiquie, B.; Divakaran, A.; Sawhney, H. “Snap-n-Eat” Food Recognition and Nutrition Estimation on a Smartphone. J. Diabetes Sci. Technol. 2015, 9, 525–533. [Google Scholar] [CrossRef] [Green Version]
  113. Okamoto, K.; Yanai, K. An automatic calorie estimation system of food images on a smartphone. In Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management, Amsterdam, The Netherlands, 16 October 2016; pp. 63–70. [Google Scholar]
  114. Jia, W.; Chen, H.C.; Yue, Y.; Li, Z.; Fernstrom, J.; Bai, Y.; Li, C.; Sun, M. Accuracy of food portion size estimation from digital pictures acquired by a chest-worn camera. Public Health Nutr. 2014, 17, 1671–1681. [Google Scholar] [CrossRef] [Green Version]
  115. Zhou, L.; Zhang, C.; Liu, F.; Qiu, Z.; He, Y. Application of Deep Learning in Food: A Review. Compr. Rev. Food Sci. Food Saf. 2019, 18, 1793–1811. [Google Scholar] [CrossRef] [Green Version]
  116. Ege, T.; Yanai, K. Simultaneous estimation of food categories and calories with multi-task CNN. In Proceedings of the 2017 Fifteenth IAPR International Conference on Machine Vision Applications (MVA), Nagoya, Japan, 8–12 May 2017; pp. 198–201. [Google Scholar]
  117. Runar, J.; Eirik Bø, K.; Aline Iyagizeneza, W. A Deep Learning Segmentation Approach to Calories and Weight Estimation of Food Images; University of Agder: Kristiansand, Norway, 2019. [Google Scholar]
  118. Chokr, M.; Elbassuoni, S. Calories Prediction from Food Images; AAAI Press: San Francisco, CA, USA, 2017; pp. 4664–4669. [Google Scholar]
  119. Chen, M.; Dhingra, K.; Wu, W.; Yang, L.; Sukthankar, R.; Yang, J. PFID: Pittsburgh fast-food image dataset. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 289–292. [Google Scholar]
  120. Christ, P.F.; Schlecht, S.; Ettlinger, F.; Grün, F.; Heinle, C.; Tatavatry, S.; Ahmadi, S.; Diepold, K.; Menze, B.H. Diabetes60—Inferring Bread Units From Food Images Using Fully Convolutional Neural Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, 22–29 October 2017; pp. 1526–1535. [Google Scholar]
  121. Elmadfa, I.; Meyer, A.L. Importance of food composition data to nutrition and public health. Eur. J. Clin. Nutr. 2010, 64, S4–S7. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  122. Hulshof, P.; Doets, E.; Seyha, S.; Bunthang, T.; Vonglokham, M.; Kounnavong, S.; Famida, U.; Muslimatun, S.; Santika, O.; Prihatini, S.; et al. Food Composition Tables in Southeast Asia: The Contribution of the SMILING Project. Matern. Child. Health J. 2019, 23, 46–54. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  123. Merchant, A.T.; Dehghan, M. Food composition database development for between country comparisons. Nutr. J. 2006, 5, 2. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  124. Puwastien, P. Issues in the development and use of food composition databases. Public Health Nutr. 2003, 5, 991–999. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  125. Partridge, E.K.; Neuhouser, M.L.; Breymeyer, K.; Schenk, J.M. Comparison of Nutrient Estimates Based on Food Volume versus Weight: Implications for Dietary Assessment Methods. Nutrients 2018, 10, 973. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  126. Stumbo, P.J.; Weiss, R. Using database values to determine food density. J. Food Compos. Anal. 2011, 24, 1174–1176. [Google Scholar] [CrossRef]
  127. Haytowitz, D.; Ahuja, J.; Showell, B.; Somanchi, M.; Nickle, M.; Nguyen, Q.; Williams, J.; Roseland, J.; Khan, M.; Patterson, K.; et al. USDA National Nutrient Database for Standard Reference, Release 28; US Department of Agriculture ARS, Nutrient Data Laboratory, Eds.; USDA: Washington, DC, USA, 2015.
  128. Charrondiere, U.R.; Haytowitz, D.; Stadlmayr, B. FAO/INFOODS Density Database Version 2.0. In Food and Agriculture Organization of the United Nations Technical Workshop Report 2012; USDA: Washington, DC, USA, 2012. [Google Scholar]
  129. Xu, C.; He, Y.; Khannan, N.; Parra, A.; Boushey, C.; Delp, E. Image-based food volume estimation. In Proceedings of the 5th International Workshop on Multimedia for Cooking & Eating Activities, Barcelona, Spain, 21 October 2013; pp. 75–80. [Google Scholar]
Figure 1. An illustration of the various steps of image-based automated dietary assessment—Segmentation, Classification, Volume Assessment and Nutrient Derivation.
Figure 2. Measuring relative volumes by pixel density, Liang and Li, 2019.
Figure 3. Food volume estimation using geometric modelling. (A) Movable spherical cap as adapted from Jia et al. 2014; (B) Projected variable cube using AR technology as adapted from Yang et al. 2019.
Figure 4. Depth map of a mango captured with a structured light system, Makhsous et al. 2019.
Figure 5. 3D reconstruction of various food models with deep learning view synthesis, Lo et al. 2019.

Tay, W.; Kaur, B.; Quek, R.; Lim, J.; Henry, C.J. Current Developments in Digital Quantitative Volume Estimation for the Optimisation of Dietary Assessment. Nutrients 2020, 12, 1167. https://doi.org/10.3390/nu12041167
