*2.1. Sensory Session Description and Video Analysis*

A total of 88 participants (34 Asians and 54 non-Asians) were recruited from the pool of staff and students at the University of Melbourne (UoM), Australia. The Asian participants were from countries such as China, Malaysia, Vietnam, the Philippines, India, and Indonesia, while the non-Asians were from Australia, New Zealand, Mexico, Colombia, Germany, Ukraine, and the United States of America. The ethics considerations were related to the minimal risks associated with the acquisition of personal information and images/videos from participants. Ethics approval was granted by the Faculty of Veterinary and Agricultural Sciences (FVAS) of the UoM (ID: 1545786.2). Furthermore, proper data handling and storage procedures have been followed for security reasons, and the data will be stored for five years. The participants were asked to sign a consent form to allow being video-recorded and to specify any allergies to assess whether they could take the test. The only information provided to participants about the samples, along with the consent form prior to the test, was "Some samples may contain insects", due to the allergic reactions that these may cause. A power analysis was performed using SAS® Power and Sample Size version 14.1 software (SAS Institute Inc., Cary, NC, USA), showing that the number of participants per culture was sufficient to find significant differences among the samples and cultures (Power: 1 − β > 0.99). The session was conducted in individual sensory booths in the sensory laboratory of the FVAS of the UoM (Melbourne, Australia) using the BioSensory app [30] to display the questionnaire and automatically record videos of the participants while they assessed the insect-based food samples. The participants were asked to taste two control and three insect-based food samples (Table 1), one at a time, to rate their liking of different descriptors, and to indicate how they felt towards each sample using a FaceScale (FS; Table 2) and a check-all-that-apply (CATA) test with the emojis that best represented their emotions towards the sample (Table 3). The samples were assigned a 3-digit random number and served at room temperature (20 °C); the avocado was prepared just before serving, and drops of lime juice were added to avoid oxidation and darkening. The participants were provided with water and water crackers as palate cleansers between samples.
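The SAS power calculation itself is not shown in the text. As an illustrative analogue, the power of a one-way ANOVA comparing the two cultural groups (34 and 54 participants, 88 in total) can be computed from the noncentral F distribution; the Cohen's f effect size of 0.5 below is an assumed value for illustration, not a figure reported by the study.

```python
from scipy.stats import f as f_dist, ncf

# Power of a one-way ANOVA comparing two cultural groups
# (34 Asian + 54 non-Asian = 88 participants in total).
# The Cohen's f effect size (0.5) is assumed for illustration.
n_total, k_groups, alpha, effect_f = 88, 2, 0.05, 0.5

df1 = k_groups - 1                          # between-groups df
df2 = n_total - k_groups                    # within-groups df
nc = effect_f ** 2 * n_total                # noncentrality parameter
f_crit = f_dist.ppf(1 - alpha, df1, df2)    # critical F under H0
power = 1 - ncf.cdf(f_crit, df1, df2, nc)   # power = 1 - beta
print(round(power, 4))
```

Under these assumptions the computed power exceeds 0.99, consistent with the 1 − β > 0.99 reported above.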

**Table 1.** Image and description of samples used in the sensory session.

| Sample Image | Sample Description |
| --- | --- |
| (image) | Tortilla chip with cornflour (Control) |
| (image) | Roasted crickets |
**Table 2.** Descriptors and scale used in the questionnaire to acquire self-reported responses. Questionnaires were uploaded in the BioSensory app, including sample numbers, descriptors, scales, and emoticons.

| Descriptor | Label | Scale |
| --- | --- | --- |
| Appearance | FS App | FaceScale (0–100) |
| Appearance | Appearance | 15 cm non-structured: Dislike extremely – Neither like nor dislike – Like extremely |
| Aroma | Aroma | 15 cm non-structured: Dislike extremely – Neither like nor dislike – Like extremely |
| Texture | Texture | 15 cm non-structured: Dislike extremely – Neither like nor dislike – Like extremely |
| Flavor | Flavor | 15 cm non-structured: Dislike extremely – Neither like nor dislike – Like extremely |
| Overall liking | OL | 15 cm non-structured: Dislike extremely – Neither like nor dislike – Like extremely |
| Purchase intention | PI | 15 cm non-structured |

**Table 3.** Emojis used for the check all that apply questions for the sensory test using the BioSensory app. Emoji meanings: Happy, Savoring, Surprised, Scared, Expressionless, Angry, Neutral, Joy, Unamused, Laughing, Confused, Disappointed.

Videos were recorded along with the self-reported answers from participants using the BioSensory app and analyzed using a second app developed with the Affectiva software development kit (SDK; Affectiva, Boston, MA, USA). The latter app can analyze videos in batch to obtain the facial expressions of participants based on the micro- and macro-movements of the different features of the face, using the histogram of oriented gradients for computer vision analysis. Additionally, it uses machine learning algorithms based on a support vector machine to translate facial expressions into emotions and emojis; it also assesses head movements (Table 4).
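The core of the histogram-of-oriented-gradients (HOG) descriptor can be sketched in a few lines of NumPy: gradient magnitude and orientation are computed per pixel, and each cell accumulates a magnitude-weighted orientation histogram. This is a minimal illustration of the technique, not the Affectiva SDK's implementation; the cell size and bin count are assumed, commonly used defaults.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Per-cell histograms of gradient orientations (unsigned, 0-180 deg),
    weighted by gradient magnitude -- the core of the HOG descriptor."""
    gy, gx = np.gradient(img.astype(float))       # image gradients
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    n_i, n_j = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((n_i, n_j, bins))
    for i in range(n_i):
        for j in range(n_j):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (a // (180.0 / bins)).astype(int) % bins
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()
    return hist

# A horizontal intensity ramp has purely horizontal gradients,
# so all energy falls into the first (0 degree) orientation bin.
ramp = np.tile(np.arange(32, dtype=float), (32, 1))
h = hog_cells(ramp)
```

A full HOG pipeline would additionally normalize the histograms over overlapping blocks of cells before concatenating them into a feature vector.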


**Table 4.** Facial expressions and emotion-related parameters obtained from the biometric video analysis (Affectiva app).

| Group | Parameters (Label) |
| --- | --- |
| Emotions | Anger, Contempt, Disgust, Fear, Joy, Sadness, Surprise, Valence |
| Facial expressions | Smile, Smirk, Inner Brow Raise (IBR), Lid Tighten (LT), Nose Wrinkle (NW), Upper Lip Rise (ULR), Mouth Open (MO), Eye Closure (EC), Eye Widen (EW), Chin Raise (CR), Jaw Drop (JD), Lip Corner Depressor (LCD), Lip Press (LPr), Lip Suck (LS) |
| Emojis | Laughing, Smiley, Relaxed, Winking face, Kissing, Stuck out tongue, Stuck out tongue with winking eye, Scared, Rage, Smirk, Disappointed |
| Head movements | Pitch, Yaw |

*2.2. Statistical Analysis and Machine Learning Modeling*

An ANOVA was conducted for the quantitative self-reported and biometric responses with a Fisher's least significant difference (LSD) *post hoc* test to assess significant differences (α = 0.05) between samples nested within cultures (Asians and non-Asians) using XLSTAT ver. 2020.3.1 (Addinsoft, New York, NY, USA). Furthermore, multivariate data analysis was conducted using principal component analysis (PCA) to find the relationships and associations among samples and variables from the quantitative self-reported and biometric responses using Matlab® R2020a (Mathworks, Inc., Natick, MA, USA). To find relationships between the frequency responses for the emojis from the CATA test and the quantitative self-reported and biometric responses, as well as the associations of the samples with each response, a multiple factor analysis (MFA) was conducted using XLSTAT. Furthermore, a correlation matrix was developed using Matlab® R2020a to assess only the significant correlations (*p* ≤ 0.05) between the quantitative self-reported and biometric responses.
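The PCA step amounts to an eigendecomposition of the standardized response matrix. The sketch below is a minimal NumPy equivalent (a synthetic matrix stands in for the real panel data, which was analyzed in Matlab® R2020a); it returns the proportion of variance explained per component and the factor loadings used to interpret each axis:

```python
import numpy as np

# Minimal PCA on standardized data, as used to relate samples and
# self-reported/biometric variables. Synthetic matrix stands in for the
# panel data (rows = sample x participant, cols = variables).
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 6))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]            # induce some correlation

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize variables
U, s, Vt = np.linalg.svd(Z, full_matrices=False)   # SVD of standardized data

explained = s**2 / np.sum(s**2)             # proportion of variance per PC
scores = Z @ Vt.T                           # sample coordinates on the PCs
loadings = Vt.T * s / np.sqrt(len(Z) - 1)   # factor loadings (variable-PC correlations)
```

Because the input is standardized, each row of `loadings` has unit norm, so every factor loading lies in [−1, 1], matching the FL values reported in Section 3.2.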

The machine learning (ML) models based on artificial neural network (ANN) pattern recognition were developed using a code written in Matlab® R2020a to evaluate 17 different training algorithms. Bayesian regularization was selected as the best algorithm, resulting in models with higher accuracy and no signs of overfitting in performance tests. A total of 45 inputs from the biometric emotion analysis (Table 4) were used to classify the samples into low and high overall liking. Model 1 was developed using data from the Asian participants, while Model 2 was constructed using data from the non-Asians. A further ML model (Model 3) was created as a general model, using the results from all the participants regardless of their cultural background. The data were divided randomly using 80% of the samples (samples × participants) for training and 20% for testing. The performance assessment was based on the mean squared error (MSE), and a neuron trimming test (3, 7, and 10 neurons) was conducted to find the models with the best performance and no overfitting. Figure 1 shows the diagram of the ANN models developed using a tan-sigmoid function in the hidden layer and Softmax neurons in the output layer.
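The architecture of Figure 1, a two-layer feed-forward network with a tan-sigmoid hidden layer and Softmax outputs trained on a random 80/20 split, can be sketched as follows. This is a NumPy illustration with synthetic inputs and plain gradient descent, not the study's Matlab® models trained with Bayesian regularization:

```python
import numpy as np

# Two-layer feed-forward classifier: 45 biometric inputs -> tanh hidden
# layer -> 2-unit softmax (low vs high overall liking). Synthetic data and
# plain gradient descent stand in for the Matlab models of the study.
rng = np.random.default_rng(2)
n, n_in, n_hidden = 440, 45, 7                 # 7 neurons, per the trimming test
X = rng.normal(size=(n, n_in))
y = (X[:, :5].sum(axis=1) > 0).astype(int)     # synthetic low/high liking target
T = np.eye(2)[y]                               # one-hot targets

idx = rng.permutation(n)                       # random 80/20 train/test split
tr, te = idx[: int(0.8 * n)], idx[int(0.8 * n):]

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 2));    b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)                   # tan-sigmoid hidden layer
    Z = H @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    return H, P / P.sum(axis=1, keepdims=True) # softmax output layer

lr = 0.2
for _ in range(800):
    H, P = forward(X[tr])
    G = (P - T[tr]) / len(tr)                  # softmax + cross-entropy gradient
    dH = (G @ W2.T) * (1 - H**2)               # back-propagate through tanh
    W2 -= lr * H.T @ G;     b2 -= lr * G.sum(axis=0)
    W1 -= lr * X[tr].T @ dH; b1 -= lr * dH.sum(axis=0)

_, P_te = forward(X[te])
test_acc = (P_te.argmax(axis=1) == y[te]).mean()
mse = np.mean((P_te - T[te])**2)               # MSE, the study's performance metric
```

In the study, three such models were fitted (Asians, non-Asians, and all participants), each evaluated by MSE on the held-out 20% and by a neuron trimming test over 3, 7, and 10 hidden neurons.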

**Figure 1.** Diagram of the artificial neural network two-layer feed-forward models showing the number of inputs (Table 4), the outputs/targets, and the number of neurons.

**3. Results**

*3.1. Results from the ANOVA of Self-Reported and Biometric Responses*

Results from the ANOVA of the self-reported responses showed that there were significant differences between samples for the eight descriptors considered (Table S1). However, non-significant differences were found between the cultures for appearance, FS App, aroma, texture, and overall liking. For flavor, overall liking, and purchase intention, there were significant differences between cultures for the tortilla chips made with cricket flour and the avocado toast with crickets. Likewise, for FS Taste, there were significant differences between cultures for the avocado toast with crickets. The control samples of tortilla chips and avocado toast, along with the tortilla chips made with cricket flour, were the most liked in terms of appearance and texture. For Asians, besides the control samples, the tortilla chips made with cricket flour were the most liked in terms of texture and overall liking but were significantly different from the controls. Conversely, for non-Asians, the tortilla chips made with cricket flour were amongst the highest in liking for all sensory descriptors as well as in the FaceScale rating, and non-significant differences were found with the control samples. Similarly, for non-Asians, the avocado toast with crickets was amongst the highest in flavor liking, with non-significant differences compared to the control samples.

The results of the ANOVA for the biometric responses showed that there were significant differences between the samples and/or cultures (Table S2). There were significant differences between the cultures for head roll when assessing the avocado toast with crickets and the whole crickets, with more negative results for the Asians, which means that they moved their heads to the right side when evaluating these samples. For joy, there were significant differences between cultures, with the avocado toast control sample eliciting a higher expression of this emotion in Asians. The Asians also behaved significantly differently from the non-Asians in terms of engagement when assessing the tortilla chips made with cricket flour, with the Asians being more engaged with the sample. The Asians expressed significantly more smiley faces when evaluating both the avocado toast control and the toast with crickets than the non-Asians. Likewise, the Asians expressed significantly more stuck out tongue with winking eye expressions when assessing the tortilla chips made with cricket flour.

*3.2. Multivariate Data Analysis*

3.2.1. Principal Components Analysis

Figure 2a shows the PCA for the Asians, which explains 74.23% of the total data variability (PC1 = 44.18%; PC2 = 30.05%). According to the factor loadings (FL), principal component one (PC1) was mainly represented by roll (FL = 0.27), flavor (FL = 0.27), and texture (FL = 0.25) on the positive side of the axis, and by winking face (FL = −0.26), disgust (FL = −0.24), and sadness (FL = −0.24) on the negative side. In turn, PC2 was mainly represented by kissing (FL = 0.32), rage (FL = 0.30), and anger (FL = 0.30) on the positive side, and pitch (FL = −0.19) on the negative side of the axis. It can be observed that both control samples (avocado toast and tortilla chip) were associated with a higher liking of flavor, purchase intention, overall liking, valence, smiley face, and roll, while the tortilla chip made with cricket flour was associated with emotions and emojis such as anger, rage, smirk, and laughing. On the other hand, the avocado toast with crickets was associated with winking face, disgust, sadness, and flushed face, while the whole crickets were associated with disgust and pitch. Figure 2b shows the PCA for the non-Asians, which explained 70.43% of the total data variability (PC1 = 50.98%; PC2 = 19.45%). The PC1 was mainly represented on the positive side of the axis by

**Surprise** Surprise **Disappointed Mouth**

**Anger** Anger **Scared**

**Valence** Valence **Kissing Cheek**

**Smile** Smile **Inner Brow Raise** IBR **Lid Tighten** LT **Valence** Valence **Kissing Cheek Raise** CR **Smile** Smile **Inner Brow Raise** IBR **Lid** to assess only the significant correlations (*p* ≤ 0.05) between the quantitative self‐reported and The machine learning (ML) models based on artificial neural networks (ANN) pattern recognition were developed using a code written in Matlab® R2020a to evaluate 17 different training **Fear** Fear **Stuck out tongue with winking eye Eye Closure** EC **winking eye Contempt** Contempt **Laughing Eye Widen** EW **Relaxed Upper Lip Rise** ULR **Pitch Surprise** Surprise **Disappointed Mouth Open** MO **Smirk** between samples nested within cultures (Asians and non‐Asians) using XLSTAT ver. 2020.3.1 (Addinsoft, New York, NY, USA). Furthermore, multivariate data analysis was conducted using principal component analysis (PCA) to find the relationships and associations among samples and variables from the quantitative self‐reported and biometric responses using Matlab® R2020a joy (FL = 0.25), smiley **Smiley Nose Wrinkle** NW **Jaw Drop** JD (FL = 0.25), engagement (FL = 0.24), and relaxed **Relaxed Upper Lip Rise** ULR **Pitch** (FL = 0.24), and on the negative side by the self-reported responses (FL = 0.24) appearance, texture, flavor, overall **Smiley Nose Wrinkle** NW **Jaw Drop** JD **Open** MO **Smirk Facial Anger** Anger **Scared Smirk Facial Expressi** SmirkFE **Depress or Disgust** Disgust **Rage Lip Press** LPr

**Anger** Anger **Scared**

**Expressi**

(Mathworks, Inc., Natick, MA, USA). On the other hand, to find relationships between the frequency

**Tighten** LT

**on**

**Wrinkle** NW **Jaw Drop** JD **Relaxed Upper Lip Rise** ULR **Pitch Smiley Nose Wrinkle** NW **Jaw Drop** JD higher accuracy and no signs of overfitting from performance tests. A total of 45 inputs from the biometrics emotion analysis (Table 4) were used to classify the samples into low and high overall liking. Model 1 was developed using data from Asian participants, while Model2 was constructed using data from non‐Asians. A further ML model (Model 3) was created as a general model, using the results from all the participants regardless of their cultural background. The data were divided **Contempt** Contempt **Laughing Eye Widen** EW **Valence** Valence **Kissing Cheek Raise** CR **Valence** Valence **Kissing Cheek Raise** CR **Smile** Smile **Inner Brow Raise** IBR **Lid Tighten** LT **Stuck out tongue Chin Raise** CR **Yaw Expressi Fear** Fear **Stuck out tongue with winking eye Closure** EC responses from emojis using the CATA test and quantitative self‐reported and biometric responses as well as the associations of samples with each response, a multiple factor analysis (MFA) was conducted using XLSTAT. Furthermore, a correlation matrix was developed using Matlab® R2020a to assess only the significant correlations (*p* ≤ 0.05) between the quantitative self‐reported and biometric responses. **Relaxed Upper Lip Rise** ULR **Pitch Stuck out tongue Chin Raise** CR **Yaw tongue Chin Raise** CR **Yaw** liking, and purchase intention. The PC2 was mainly represented by pitch **Relaxed Upper Lip Rise** ULR **Pitch Stuck out tongue Chin Raise** CR **Yaw** (FL = 0.38) and stuck out tongue with winking eye **on Fear** Fear **Stuck out tongue with winking eye Eye Closure** EC (FL = 0.21) on the positive side of the axis, and by surprise **Stuck out tongue with winking eye Eye Closure** EC **Contempt** Contempt **Laughing Eye Sadness** Sadness **Smirk Lip Suck** LS **Surprise** Surprise **Disappointed Mouth**

**Relaxed Upper Lip Rise** ULR **Pitch** randomly using 80% of the samples (samples x participants) for training and 20% for testing. The performance assessment was based on the mean squared error (MSE), and a neuron trimming test (3, **Smiley Nose Wrinkle** NW **Jaw Contempt** Contempt **Laughing Eye** The machine learning (ML) models based on artificial neural networks (ANN) pattern recognition were developed using a code written in Matlab® R2020a to evaluate 17 different training **Contempt** Contempt **Laughing Eye Widen** EW (FL = −0.34), laughing **Widen** EW (FL = −0.34), and disappointed **Open** MO (FL = −0.34) on the negative side.

**Raise** IBR **Lid**

**Wrinkle** NW **Jaw**

**Raise** CR

**Rise** ULR **Pitch**

**Drop** JD

**Raise** IBR **Lid**

**Wrinkle** NW **Jaw**

**Rise** ULR **Pitch**

**Tighten** LT

**tongue Chin Raise** CR **Yaw**

**Smile** Smile **Inner Brow**

liking. Model 1 was developed using data from Asian participants, while Model 2 was constructed using data from non‐Asians. A further ML model (Model 3) was created as a general model, using the results from all the participants regardless of their cultural background. The data were divided randomly using 80% of the samples (samples x participants) for training and 20% for testing. The performance assessment was based on the mean squared error (MSE), and a neuron trimming test (3, 7, 10 neurons) was conducted to find the models with the best performance and no overfitting from performance tests. Figure 1 shows the diagram of the ANN models developed using a tan‐sigmoid

algorithms. Bayesian Regularization was selected as the best algorithm, resulting in models with

**Smiley Nose**

**Relaxed Upper Lip**
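The ANOVA and LSD steps above were run in XLSTAT; purely as an illustration, a minimal numpy sketch of a one-way F statistic with a Fisher's LSD threshold (synthetic data, a placeholder `t_crit`, and a balanced one-way layout rather than the study's nested design) could look like:

```python
import numpy as np

def anova_lsd(groups, t_crit=2.0):
    """One-way ANOVA F statistic plus Fisher's least significant difference
    for a balanced design. t_crit is the two-sided critical t value for the
    error degrees of freedom (placeholder; look it up for alpha = 0.05)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = np.mean(np.concatenate(groups))
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(np.sum((np.asarray(g) - np.mean(g)) ** 2) for g in groups)
    ms_within = ss_within / (n - k)             # error mean square
    F = (ss_between / (k - 1)) / ms_within
    m = len(groups[0])                          # per-group size (balanced)
    lsd = t_crit * np.sqrt(2 * ms_within / m)   # pairs of means further apart
    return F, lsd                               # than lsd differ significantly

rng = np.random.default_rng(4)
samples = [rng.normal(mu, 1.0, 20) for mu in (0.0, 1.5, 3.0)]
F, lsd = anova_lsd(samples)
```

Pairwise mean differences larger than `lsd` would then be flagged, mimicking the LSD *post hoc* comparisons reported for the samples.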

**Figure 1.** Diagram of the artificial neural network two-layer feed-forward models showing the number of inputs (Table 4), the outputs/targets, and the number of neurons.
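The paper's models were built in Matlab; as a rough sketch only, the two-layer architecture (tan-sigmoid hidden layer, Softmax output, random 80%/20% split, MSE check) can be mimicked in numpy. Plain gradient descent with an L2 weight penalty stands in here for Bayesian Regularization, and the 45 "biometric" inputs are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_ann(X, Y, hidden=7, epochs=1000, lr=0.1, l2=1e-3):
    """Two-layer feed-forward net: tan-sigmoid (tanh) hidden layer, Softmax
    output. The L2 penalty is a simple stand-in for Bayesian Regularization."""
    n, d = X.shape
    k = Y.shape[1]
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, k)); b2 = np.zeros(k)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden-layer activations
        P = softmax(H @ W2 + b2)            # class probabilities
        G = (P - Y) / n                     # cross-entropy output gradient
        gH = (G @ W2.T) * (1 - H ** 2)      # backprop through tanh
        W2 -= lr * (H.T @ G + l2 * W2); b2 -= lr * G.sum(axis=0)
        W1 -= lr * (X.T @ gH + l2 * W1); b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    return softmax(np.tanh(X @ W1 + b1) @ W2 + b2)

# Synthetic stand-in: 45 "biometric" inputs, low/high overall-liking target
X = rng.normal(size=(200, 45))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]
idx = rng.permutation(200)
tr, te = idx[:160], idx[160:]               # random 80%/20% split
params = train_ann(X[tr], Y[tr], hidden=7)
P = predict(X[te], *params)
mse = float(np.mean((P - Y[te]) ** 2))      # MSE used as performance check
acc = float(np.mean(P.argmax(axis=1) == y[te]))
```

Repeating the fit with 3, 7, and 10 hidden neurons and comparing test MSE would mirror the neuron trimming test described above.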

**3. Results**

*3.1. Results from the ANOVA of Self‐reported and Biometric Responses*


For Asians (Figure 2a), jaw drop (FL = 0.32), rage (FL = 0.30), and anger (FL = 0.30) were mainly represented on the positive side of the axis, and pitch (FL = −0.19) on the negative side. It can be observed that both control samples (avocado toast and tortilla chip) were associated with a higher liking of flavor, purchase intention, overall liking, valence, smiley face, and roll, while the tortilla chip made with cricket flour was associated with emotions and emojis such as anger, rage, smirk, and laughing. On the other hand, the avocado toast with crickets was associated with winking face, disgust, sadness, and flushed face, while the whole crickets were associated with disgust and pitch.

Figure 2b shows the PCA for non-Asians, which explained 70.43% of the total data variability (PC1 = 50.98%; PC2 = 19.45%). The PC1 was mainly represented on the positive side of the axis by joy (FL = 0.25), smiley (FL = 0.25), engagement (FL = 0.24), and relaxed (FL = 0.24), and on the negative side by the self-reported responses of liking of appearance, texture, flavor, overall liking, and purchase intention. The PC2 was mainly represented by pitch (FL = 0.38) and stuck out tongue with winking eye (FL = 0.21) on the positive side of the axis, and by surprise (FL = −0.34), laughing (FL = −0.34), and disappointed (FL = −0.34) on the negative side. The control sample of the tortilla chip was associated with yaw and disappointed, while the control sample of avocado toast and the tortilla chip with cricket flour were mainly linked with the liking of aroma and flavor, as well as disgust. On the other hand, the avocado toast with crickets was associated with pitch, stuck out tongue with winking eye, disgust, and smirk, while the whole crickets were linked with joy, valence, relaxed, anger, rage, engagement, and attention.

**Figure 2.** Principal components analysis for the biometric and quantitative self-reported responses for (**a**) Asians and (**b**) non-Asians. Abbreviations: PC1 and PC2: principal components one and two.
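The factor loadings (FL) reported above are the correlations between each variable and a principal component. A minimal numpy sketch of PCA on standardized responses (synthetic data, not the study's measurements) shows how scores, loadings, and explained variance are obtained:

```python
import numpy as np

def pca_loadings(X, n_components=2):
    """PCA on standardized variables; returns sample scores, factor loadings
    (FL, the correlation of each variable with each PC), and the proportion
    of variance explained per component."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    n = Z.shape[0]
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # PC1, PC2 coordinates
    # loadings = correlation(variable, score); score variance is S^2/(n-1)
    loadings = Vt[:n_components].T * (S[:n_components] / np.sqrt(n - 1))
    explained = S ** 2 / np.sum(S ** 2)               # variance proportions
    return scores, loadings, explained

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))        # synthetic stand-in for the responses
scores, FL, expl = pca_loadings(X)
```

Because the input is standardized, each FL entry lies between −1 and 1, matching the magnitudes quoted in the biplots.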

#### 3.2.2. Correlation Analysis

Figure 3a shows the significant correlations (*p* ≤ 0.05) between the quantitative self-reported and biometric emotional responses for Asians. It can be observed that roll was positively correlated with the liking of appearance (correlation coefficient: *r* = 0.88), aroma (*r* = 0.92), texture (*r* = 0.93), flavor (*r* = 0.98), overall liking (*r* = 0.92), FS Taste (*r* = 0.91), and purchase intention (*r* = 0.91). The liking of aroma was negatively correlated with winking face (*r* = −0.94) and disgust (*r* = −0.95), while the liking of flavor had a negative correlation with winking face (*r* = −0.93), and the latter with FS Taste (*r* = −0.88).

On the other hand, Figure 3b shows the significant correlations (*p* ≤ 0.05) found for non-Asians. It can be observed that joy had a negative correlation with the liking of appearance (*r* = −0.94), FS App (*r* = −0.95), the liking of texture (*r* = −0.97), flavor (*r* = −0.89), overall liking (*r* = −0.94), FS Taste (*r* = −0.89), and purchase intention (*r* = −0.96). Valence had a negative correlation with appearance liking (*r* = −0.97), FS App (*r* = −0.97), texture liking (*r* = −0.96), overall liking (*r* = −0.95), and FS Taste (*r* = −0.96). Similarly, relaxed was negatively correlated with appearance liking (*r* = −0.92), FS App (*r* = −0.94), texture liking (*r* = −0.96), overall liking (*r* = −0.90), and purchase intention (*r* = −0.93). Smiley had a negative correlation with appearance liking (*r* = −0.89), FS App (*r* = −0.88), the liking of aroma (*r* = −0.89), texture (*r* = −0.91), flavor (*r* = −0.92), overall liking (*r* = −0.90), and purchase intention (*r* = −0.90).

**Figure 3.** Matrices showing the significant (*p* ≤ 0.05) correlations between the quantitative self-reported and biometric emotional responses for (**a**) Asians and (**b**) non-Asians. Color bar: blue side depicts the positive correlations, while the yellow side represents the negative correlations; likewise, darker blue and yellow denote higher correlations.

#### 3.2.3. Multiple Factor Analysis

Figure 4a shows the MFA of Asian consumers, which explained a total of 76.61% of the data variability (Factor 1: F1 = …). The F1 was mainly represented on the positive side of the axis by rage (FL = 0.97) and kissing (FL = 0.93), and on the negative side by unamused (FL = −0.49) and pitch (FL = −0.42). It can be observed that the control samples (tortilla chip and avocado toast) were associated mainly with self-reported responses such as the liking of flavor, aroma, overall liking, and purchase intention, as well as some …
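MFA balances heterogeneous variable groups before a global PCA. A minimal quantitative-only sketch (each group down-weighted by its first singular value; synthetic stand-ins for the liking, biometric, and emoji-frequency groups; note that a full MFA would treat the CATA counts with correspondence analysis rather than as quantitative variables):

```python
import numpy as np

def mfa(groups, n_components=2):
    """Minimal MFA sketch: standardize each variable group, weight it by the
    inverse of its first singular value so no single group dominates, then
    run a global PCA on the concatenated weighted table."""
    weighted = []
    for G in groups:
        Z = (G - G.mean(axis=0)) / G.std(axis=0, ddof=1)
        s1 = np.linalg.svd(Z, compute_uv=False)[0]
        weighted.append(Z / s1)
    T = np.hstack(weighted)
    U, S, _ = np.linalg.svd(T, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]    # sample map (F1, F2)
    explained = (S ** 2 / np.sum(S ** 2))[:n_components]
    return scores, explained

rng = np.random.default_rng(3)
liking = rng.normal(size=(15, 4))       # stand-in: self-reported liking
biometrics = rng.normal(size=(15, 6))   # stand-in: biometric responses
emoji_freq = rng.normal(size=(15, 3))   # stand-in: CATA emoji frequencies
scores, expl = mfa([liking, biometrics, emoji_freq])
```

The first two columns of `scores` correspond to the F1/F2 plane on which the samples and response groups are projected in Figure 4.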

app.

analysis. Additionally, it uses machine learning algorithms based on support vector machine to translate facial expressions into emotions and emojis; it also assesses head movements (Table 4).

**Parameter Label Parameter Label Paramet**

**Table 4.** Facial expressions and emotion‐related parameters obtained from the biometric video

*Foods* **2019** *8*, x FOR PEER REVIEW 5 of 20

**Anger** Anger **Scared**

**Parameter Label Parameter Label Paramet**

**er**

**or**

**Stuck out**

**Smiley Nose**

**Stuck out tongue with winking eye**

**Contempt** Contempt **Laughing Eye**

**Relaxed Upper Lip**

number of inputs (Table 4), the outputs/targets, and the number of neurons.

**Valence** Valence **Kissing Cheek**

**tongue Chin Raise** CR **Yaw**

*3.1. Results from the ANOVA of Self‐reported and Biometric Responses*

**Smile** Smile **Inner Brow**

**Smiley Nose**

**Relaxed Upper Lip**

**Stuck out**

**Fear** Fear

**Figure 1.** Diagram of the artificial neural network two‐layer feed‐forward models showing the

**tongue Chin Raise** CR **Yaw**

**Raise** IBR **Lid**

**Wrinkle** NW **Jaw**

**Rise** ULR **Pitch**

**Wrinkle** NW **Jaw**

obtain the facial expressions from participants based on the micro‐ and macro‐movements of the

**Eye**

obtain the facial expressions from participants based on the micro‐ and macro‐movements of the

**Closure** EC

**Widen** EW

**Raise** CR

obtain the facial expressions from participants based on the micro‐ and macro‐movements of the

**Rise** ULR **Pitch**

**Table 3.** Emojis used for the check all that apply questions for the sensory test using the BioSensory

**Surprise** Surprise **Disappointed Mouth**

**Emoji Meaning Emoji Meani**

Purchase intention

*Foods* **2019** *8*, x FOR PEER REVIEW 4 of 20

Dislike extremely‐Neither like nor dislike‐

100) FS App

*Foods* **2019** *8*, x FOR PEER REVIEW 5 of 20

analysis. Additionally, it uses machine learning algorithms based on support vector machine to translate facial expressions into emotions and emojis; it also assesses head movements (Table 4).

**Table 4.** Facial expressions and emotion‐related parameters obtained from the biometric video

Dislike extremely‐Neither like nor dislike‐

Appearance <sup>15</sup> cm non‐

Dislike extremely‐Neither like nor dislike‐

Dislike extremely‐Neither like nor dislike‐

Appearance FaceScale (0–

**Parameter Label Parameter Label Paramet**

**Disgust** Disgust **Rage Lip**

Dislike extremely‐Neither like nor dislike‐

Aroma <sup>15</sup> cm non‐

100) FS Taste

Texture <sup>15</sup> cm non‐

Dislike extremely‐Neither like nor dislike‐

**Sadness** Sadness **Smirk Lip Suck** LS

Tasting FaceScale (0–

Overall liking <sup>15</sup> cm non‐

*Foods* **2019** *8*, x FOR PEER REVIEW 5 of 20

analysis. Additionally, it uses machine learning algorithms based on support vector machine to translate facial expressions into emotions and emojis; it also assesses head movements (Table 4).

**Table 4.** Facial expressions and emotion‐related parameters obtained from the biometric video

Flavor <sup>15</sup> cm non‐

Like extremely Appearance

Like extremely Aroma

structured

structured

structured

structured

structured

Like extremely Texture

Like extremely Flavor

Like extremely OL

analysis (Affectiva app).

Appearance <sup>15</sup> cm non‐

**er**

**Lip Corner Depress or**

*Foods* **2019** *8*, x FOR PEER REVIEW 4 of 20

Appearance FaceScale (0–

Aroma <sup>15</sup> cm non‐

**Joy** Joy **Winking face**

Texture <sup>15</sup> cm non‐

Flavor <sup>15</sup> cm non‐

Tasting FaceScale (0–

**Emoji Meaning Emoji Meani**

**Table 3.** Emojis used for the check all that apply questions for the sensory test using the BioSensory

Overall liking <sup>15</sup> cm non‐

structured

**Eye**

**Label**

structured

structured

**Smirk Facial Expressi on**

Purchase intention

**Anger** Anger **Scared**

**er**

**Lip Corner Depress or**

*Foods* **2019** *8*, x FOR PEER REVIEW 5 of 20

Appearance FaceScale (0–

Appearance <sup>15</sup> cm non‐

structured

analysis. Additionally, it uses machine learning algorithms based on support vector machine to

Aroma <sup>15</sup> cm non‐

structured

structured

Dislike extremely‐Neither like nor dislike‐

100) FS App

Dislike extremely‐Neither like nor dislike‐

Dislike extremely‐Neither like nor dislike‐

Dislike extremely‐Neither like nor dislike‐

Dislike extremely‐Neither like nor dislike‐

analysis (Affectiva app).

100) FS Taste

Appearance <sup>15</sup> cm non‐

Dislike extremely‐Neither like nor dislike‐

Appearance FaceScale (0–

**Parameter Label Parameter Label Paramet**

**Press** LPr

structured

structured

*Foods* **2019** *8*, x FOR PEERREVIEW 4 of 20

**Disgust** Disgust **Rage Lip**

**Open** MO

structured

structured

analysis (Affectiva app).

**Surprise** Surprise **Disappointed Mouth**

**Label**

LCD

**Closure** EC

15 cm non‐ structured

Happy Savori

*Foods* **2019** *8*, x FOR PEER REVIEW 4 of 20

**Widen** EW

Like extremely PI

**ng**

15 cm non‐ structured

Appearance <sup>15</sup> cm non‐

Appearance FaceScale (0–

Appearance <sup>15</sup> cm non‐

ng

Appearance <sup>15</sup> cm non‐

Appearance FaceScale (0–

Aroma <sup>15</sup> cm non‐

Texture <sup>15</sup> cm non‐

Flavor <sup>15</sup> cm non‐

Tasting FaceScale (0–

**Table 2.** Descriptors and scales used for the self-reported responses in the sensory test using the BioSensory app.

| Descriptor | Label | Scale and anchors |
|---|---|---|
| Appearance | Appearance | 15 cm non-structured; Dislike extremely - Neither like nor dislike - Like extremely |
| Aroma | Aroma | 15 cm non-structured; Dislike extremely - Neither like nor dislike - Like extremely |
| Texture | Texture | 15 cm non-structured; Dislike extremely - Neither like nor dislike - Like extremely |
| Flavor | Flavor | 15 cm non-structured; Dislike extremely - Neither like nor dislike - Like extremely |
| Overall liking | OL | 15 cm non-structured; Dislike extremely - Neither like nor dislike - Like extremely |
| Purchase intention | PI | 15 cm non-structured; Dislike extremely - Neither like nor dislike - Like extremely |
| Appearance FaceScale | FS App | FaceScale (0-100) |
| Tasting FaceScale | FS Taste | FaceScale (0-100) |

**Table 3.** Emojis used for the check all that apply questions for the sensory test using the BioSensory app. Emoji meanings: Happy, Savoring, Surprised, Scared, Expressionless, Angry, Neutral, Joy, Unamused, Laughing, Disappointed, and Confused.

Videos were recorded along with the self-reported answers from participants using the BioSensory app and analyzed using a second app developed with the Affectiva software development kit (SDK; Affectiva, Boston, MA, USA). The latter app can analyze videos in batch to obtain the facial expressions of participants based on the micro- and macro-movements of the different features of the face, using the histogram of oriented gradients (HOG) for computer vision analysis. Additionally, it uses machine learning algorithms based on support vector machines to translate facial expressions into emotions and emojis; it also assesses head movements (Table 4).
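To illustrate the HOG step only, the sketch below computes a gradient-orientation histogram for an image patch. The synthetic patches, bin count, and normalization are illustrative assumptions; this is not Affectiva's implementation, which pairs such descriptors with trained SVM classifiers.

```python
import numpy as np

# HOG idea in miniature: gradient orientations over a patch are binned into a
# magnitude-weighted histogram, which then serves as the feature vector fed to
# a classifier (SVMs in the Affectiva pipeline).
def hog_patch(img, bins=8):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-9)      # block normalization

# Two synthetic 16x16 "patches": a vertical and a horizontal edge.
v = np.tile(np.r_[np.zeros(8), np.ones(8)], (16, 1))  # vertical edge
h = v.T                                               # horizontal edge
fv, fh = hog_patch(v), hog_patch(h)
print(np.argmax(fv) != np.argmax(fh))  # the dominant orientation bins differ
```

The two edge directions land their weight in different orientation bins, which is the property a downstream classifier exploits.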

**Table 4.** Facial expressions and emotion-related parameters obtained from the biometric video analysis (Affectiva app).

| Parameter | Label |
|---|---|
| Joy | Joy |
| Anger | Anger |
| Disgust | Disgust |
| Sadness | Sadness |
| Surprise | Surprise |
| Fear | Fear |
| Contempt | Contempt |
| Valence | Valence |
| Smile | Smile |
| Smirk facial expression | SmirkFE |
| Inner Brow Raise | IBR |
| Nose Wrinkle | NW |
| Upper Lip Rise | ULR |
| Cheek Raise | |
| Chin Raise | CR |
| Lid Tighten | LT |
| Jaw Drop | JD |
| Mouth Open | MO |
| Lip Press | LPr |
| Lip Suck | LS |
| Lip Corner Depressor | LCD |
| Eye Closure | EC |
| Eye Widen | EW |
| Head pitch | |
| Head yaw | |
| Winking face (emoji) | |
| Scared (emoji) | |
| Rage (emoji) | |
| Smiley (emoji) | |
| Relaxed (emoji) | |
| Stuck out tongue (emoji) | |
| Stuck out tongue with winking eye (emoji) | |
| Kissing (emoji) | |
| Laughing (emoji) | |
| Disappointed (emoji) | |

**3. Results**

#### *3.1. Results from the ANOVA of Self-reported and Biometric Responses*

**Figure 1.** Diagram of the artificial neural network two-layer feed-forward models.

#### *3.2.3. Multiple Factor Analysis*

**Figure 4.** Multiple factor analysis for the biometric, frequencies (check all that apply: CATA) and quantitative self-reported (Liking) responses for (**a**) Asians and (**b**) non-Asians. Abbreviations: F1 and F2: factors one and two.
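Multiple factor analysis of this kind can be sketched as a group-weighted PCA: each variable group is standardized and divided by its first singular value so no group dominates, and a global PCA then yields the factors F1 and F2. A minimal NumPy sketch with toy data (the group sizes and values are assumptions, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three toy variable groups standing in for biometrics, CATA emoji
# frequencies, and liking scores (30 observations each).
groups = [rng.normal(size=(30, 6)),
          rng.normal(size=(30, 12)),
          rng.normal(size=(30, 4))]

weighted = []
for G in groups:
    Z = (G - G.mean(0)) / G.std(0)                   # standardize within group
    s1 = np.linalg.svd(Z, compute_uv=False)[0]
    weighted.append(Z / s1)                          # balance group influence

X = np.hstack(weighted)
U, s, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
F = U[:, :2] * s[:2]                                 # coordinates on F1 and F2
print(F.shape)
```

The rows of `F` are the sample coordinates that a biplot such as Figure 4 displays.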

#### *3.3. Machine Learning Modeling*

In Table 5, it can be observed that Model 1, developed with the results from the Asian participants, had 92% accuracy in classifying the samples into high and low liking using only the emotional responses from biometrics as inputs. Model 2, for non-Asians, had a higher overall accuracy (94%) than the Asian model, and also a higher accuracy in the training stage (non-Asians: 76%; Asians: 71%). On the other hand, Model 3, which was developed as a general model with data from all the participants (Asians and non-Asians), presented an overall accuracy of 89%. The three models had a lower MSE value for the training stage compared to testing, which indicates that there was no overfitting. Figure 5 shows the receiver operating characteristic (ROC) curves, which depict the performance of each model and classification group in terms of the false positive rate (1 − specificity) and the true positive rate (sensitivity).
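A two-layer feed-forward network for this kind of binary liking classification can be sketched as follows. The synthetic inputs, hidden-layer size, learning rate, and 70/30 split are illustrative assumptions, not the configuration used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's data: 8 biometric/emotion scores per
# observation and a binary high(1)/low(0) liking target.
n, d, h = 400, 8, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.3 * rng.normal(size=n) > 0).astype(float)

idx = rng.permutation(n)
tr, te = idx[:280], idx[280:]          # assumed 70/30 train/test split

# Two-layer feed-forward network: tanh hidden layer, sigmoid output,
# trained with full-batch gradient descent on the mean squared error (MSE).
W1 = rng.normal(scale=0.5, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.5, size=h);      b2 = 0.0

def forward(A):
    H = np.tanh(A @ W1 + b1)
    return H, 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))

lr = 0.5
for _ in range(2000):
    H, p = forward(X[tr])
    dz2 = (p - y[tr]) * p * (1.0 - p)          # MSE gradient through sigmoid
    gW2 = H.T @ dz2 / len(tr); gb2 = dz2.mean()
    dH = np.outer(dz2, W2) * (1.0 - H**2)      # backprop through tanh
    gW1 = X[tr].T @ dH / len(tr); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

_, p_tr = forward(X[tr]); _, p_te = forward(X[te])
mse_tr = float(np.mean((p_tr - y[tr]) ** 2))
mse_te = float(np.mean((p_te - y[te]) ** 2))
acc = float(np.mean((p_te > 0.5) == y[te]))
print(f"train MSE {mse_tr:.3f}, test MSE {mse_te:.3f}, test accuracy {acc:.2f}")
```

Comparing the train and test MSE values, as the authors do for Models 1-3, is the check against overfitting.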




**Table 5.** *Cont.*

Overall 438 89% 11%

**Figure 5.** Receiver operating characteristics (ROC) curves for the three artificial neural network models for (**a**) Model 1: Asians, (**b**) Model 2: non-Asians, and (**c**) Model 3: general (Asians + non-Asians).
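A ROC curve such as those in Figure 5 is built by sweeping a decision threshold over the model scores and recording the true positive rate against the false positive rate at each cut. A minimal sketch with toy labels and scores (assumed values, not the models' outputs):

```python
import numpy as np

# Toy ground truth (1 = high liking) and classifier scores.
y_true  = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.7, 0.3, 0.6, 0.55])

def roc_points(y_true, y_score):
    order = np.argsort(-y_score)               # sweep the threshold downward
    y = y_true[order]
    tpr = np.cumsum(y) / y.sum()               # sensitivity
    fpr = np.cumsum(1 - y) / (1 - y).sum()     # 1 - specificity
    return np.r_[0.0, fpr], np.r_[0.0, tpr]

fpr, tpr = roc_points(y_true, y_score)
# Area under the curve via the trapezoid rule.
auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))
print(f"AUC = {auc:.2f}")  # AUC = 0.92
```

An AUC of 0.5 corresponds to the chance diagonal; values near 1 indicate the near-perfect separation the reported accuracies suggest.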
