*2.5. Facial Expressions*

Video segments of the consumption of each bite were stored together with the participant's code, the product code, and time and date information. Facial expression data were automatically analyzed per time frame of 0.04 s by FaceReader 8.0 (Noldus Information Technology, Wageningen, The Netherlands) in three steps. In the first step, the face is detected using the Viola-Jones algorithm [34]. Next, the face is accurately modelled using an algorithmic approach [35]. Based on the Active Appearance Model method described by Cootes and Taylor [36], the model is trained on a database of annotated images that describes over 500 key points in the face and its facial texture. Finally, the actual classification of the facial expressions is performed by an artificial neural network trained on 10,000 manually annotated images.

The classification provides as output seven basic expressions (happy, sad, angry, surprised, scared, disgusted, and contemptuous) and one neutral state, on the basis of the Facial Action Coding System developed by Ekman and Friesen [37], as well as three "affective attitudes" (interest, boredom, and confusion) and arousal and valence dimensions based on combinations of facial expressions. Valence scores were calculated per time frame as the FaceReader happiness score minus the score of the most intense FaceReader negative emotion (sad, angry, surprised, scared, disgusted, or contemptuous). FaceReader scores for each emotional expression, except arousal, range from 0 (emotion is not detected) to 1 (maximal detection) and are based on intensity judgments of human experts. Arousal scores range from −1 to 1. FaceReader allows for the simultaneous presence of multiple emotions. Only the arousal and valence dimensions were used for this study.
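The per-frame valence calculation described above can be sketched as follows. This is an illustrative reconstruction, not FaceReader's actual implementation; the dictionary of per-frame expression scores and the function name are assumptions for the example, while the happiness-minus-strongest-negative rule and the negative-emotion list follow the text.

```python
# Illustrative sketch of the per-frame valence rule described in the text:
# valence = happiness score minus the most intense negative-emotion score.
# Not FaceReader's internal code; input format is assumed for the example.

NEGATIVE_EMOTIONS = ["sad", "angry", "surprised", "scared",
                     "disgusted", "contemptuous"]

def valence(scores: dict) -> float:
    """Compute valence for one 0.04 s time frame.

    `scores` maps expression names to intensities in [0, 1],
    so the result lies in [-1, 1].
    """
    return scores["happy"] - max(scores[e] for e in NEGATIVE_EMOTIONS)

# Hypothetical frame: happiness 0.75, strongest negative (disgusted) 0.25.
frame = {"happy": 0.75, "sad": 0.10, "angry": 0.05, "surprised": 0.20,
         "scared": 0.00, "disgusted": 0.25, "contemptuous": 0.10}
print(valence(frame))  # prints 0.5
```

Because multiple emotions may be detected simultaneously, only the single strongest negative score is subtracted rather than their sum, which keeps valence bounded in [−1, 1].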

FaceReader was validated by others using the Radboud Faces Database, a standardized set of images of expressions associated with basic emotions. The persons depicted in the images were trained to pose a particular emotion, and the images were labelled accordingly by the researchers. The images were then analyzed in FaceReader. Accuracy of FaceReader's assessment of the emotions ranged from 84.4% for scared to 95.9% for happy, with an average of 90% [38]. Another validation study showed superior performance of FaceReader for neutral faces (90% correct recognition for FaceReader versus 59% for humans) [39].

A more detailed description of the science behind FaceReader can be found at http://info.noldus.com/free-white-paper-on-facereader-methodology/ (accessed on 18 May 2021). Whether all possible emotional expressions can be categorized by these seven basic emotions, the affective attitudes, and the arousal/valence dimensions remains a matter of debate (e.g., [40]).
