
Summary

A protocol for capturing and statistically analyzing emotional response of a population to beverages and liquefied foods in a sensory evaluation laboratory using automated facial expression analysis software is described.

Abstract

We demonstrate a method for capturing emotional response to beverages and liquefied foods in a sensory evaluation laboratory using automated facial expression analysis (AFEA) software. Additionally, we demonstrate a method for extracting relevant emotional data output and plotting the emotional response of a population over a specified time frame. By time pairing each participant's treatment response to a control stimulus (baseline), the overall emotional response over time and across multiple participants can be quantified. AFEA is a prospective analytical tool for assessing unbiased response to food and beverages. At present, most research has focused on beverages. Methodologies and analyses have not yet been standardized for the application of AFEA to beverages and foods; a consistent standard methodology is needed. Optimizing video capture procedures and the resulting video quality aids successful collection of emotional responses to foods. Furthermore, the data analysis methodology presented is novel for extracting the data relevant to the emotional response. The combination of video capture optimization and data analysis will aid in standardizing the protocol for automated facial expression analysis and interpretation of emotional response data.

Introduction

Automated facial expression analysis (AFEA) is a prospective analytical tool for characterizing emotional responses to beverages and foods. Emotional analysis can add an extra dimension to existing sensory science methodologies, food evaluation practices, and hedonic scale ratings typically used in both research and industry settings. Emotional analysis could provide an additional metric that reveals a more accurate response to foods and beverages. Hedonic scoring may include participant bias, as participants may fail to record their reactions1.

AFEA research has been used in many research applications including computer gaming, user behavior, education/pedagogy, and psychology studies on empathy and deceit. Most food-associated research has focused on characterizing emotional response to food quality and human behavior with food. With the recent trend in gaining insights into food behaviors, a growing body of literature reports use of AFEA for characterizing the human emotional response associated with foods, beverages, and odorants1-12.

AFEA is derived from the Facial Action Coding System (FACS), which discriminates facial movements characterized by action units (AUs) on a 5-point intensity scale13. The FACS approach requires trained review experts, manual coding, and significant evaluation time, and provides limited data analysis options. AFEA was developed as a rapid evaluation method to determine emotions. AFEA software relies on facial muscular movement, facial databases, and algorithms to characterize the emotional response14-18. The AFEA software used in this study reached a "FACS index of agreement of 0.67 on average on both the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP) and Amsterdam Dynamic Facial Expression Set (ADFES), which is close to a standard agreement of 0.70 for manual coding"19. Universal emotions included in the analysis are happy (positive), sad (negative), disgusted (negative), surprised (positive or negative), angry (negative), scared (negative), and neutral, each on a separate scale of 0 to 1 (0 = not expressed; 1 = fully expressed)20. In addition, the psychology literature classifies happy, surprised, and angry as "approach" emotions (toward stimuli) and sad, scared, and disgusted as "withdrawal" emotions (away from aversive stimuli)21.
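The emotion channels and groupings described above can be captured in a small lookup table; a minimal Python sketch (the category labels follow the text, while the dictionary layout itself is only illustrative):

```python
# Seven AFEA output channels, each scored 0 (not expressed) to 1 (fully
# expressed), grouped by valence and approach/withdrawal as in the text.
EMOTIONS = {
    "happy":     {"valence": "positive",          "motivation": "approach"},
    "surprised": {"valence": "positive/negative", "motivation": "approach"},
    "angry":     {"valence": "negative",          "motivation": "approach"},
    "sad":       {"valence": "negative",          "motivation": "withdrawal"},
    "scared":    {"valence": "negative",          "motivation": "withdrawal"},
    "disgusted": {"valence": "negative",          "motivation": "withdrawal"},
    "neutral":   {"valence": "none",              "motivation": "none"},
}

def withdrawal_emotions():
    """Emotions classified as withdrawal (away from aversive stimuli)."""
    return sorted(e for e, v in EMOTIONS.items()
                  if v["motivation"] == "withdrawal")
```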

One limitation of the current AFEA software for characterizing emotions associated with foods is interference from facial movements associated with chewing and swallowing, as well as other gross motor motions such as extreme head movements. The software targets smaller facial muscular motions, relating position and degree of movement, based on over 500 muscle points on the face16,17. Chewing motions interfere with classification of expressions. This limitation may be addressed by using liquefied foods. However, other methodological challenges can also decrease video sensitivity and AFEA accuracy, including the data collection environment, technology, researcher instructions, participant behavior, and participant attributes.

A standard methodology has not been developed and verified for optimal video capture and data analysis using AFEA for emotional response to foods and beverages in a sensory evaluation laboratory setting. Many aspects can affect the video capture environment, including lighting, shadowing due to lighting, participant directions, participant behavior, and participant height, as well as camera height, camera angle, and equipment settings. Moreover, data analysis methodologies are inconsistent and lack a standard approach for assessing emotional response. Here, we demonstrate our standard operating procedure for capturing emotional data and processing the data into meaningful results using beverages (flavored milk, unflavored milk, and unflavored water) for evaluation. To our knowledge, only one peer-reviewed publication, from our lab group, has utilized time series for data interpretation in emotion analysis8; however, that method has been updated for the approach presented here. Our aim is to develop an improved and consistent methodology to help with reproducibility in a sensory evaluation laboratory setting. For demonstration, the objective of the study model is to evaluate whether AFEA can supplement traditional hedonic acceptability assessment of flavored milk, unflavored milk, and unflavored water. The intention of this video protocol is to help establish AFEA methodology, standardize video capture criteria in a sensory evaluation laboratory (sensory booth setting), and illustrate a method for temporal emotional data analysis of a population.

Protocol

Ethics Statement: This study was pre-approved by Virginia Tech Institutional Review Board (IRB) (IRB 14-229) prior to starting the project.

Caution: Human subject research requires informed consent prior to participation. In addition to IRB approval, consent for use of still or video images is also required prior to releasing any images for print, video, or graphic imaging. Additionally, food allergens are disclosed prior to testing. Participants are asked prior to panel start if they have any intolerances, allergies, or other concerns.

Note: Exclusion Criteria: Automated facial expression analysis is sensitive to thick-framed glasses, heavily bearded faces, and skin tone. Participants meeting these criteria are incompatible with the software analysis due to an increased risk of failed videos, which is attributed to the software's inability to find the face.

1. Sample Preparation and Participant Recruitment

  1. Prepare beverage or soft food samples.
    1. Prepare intensified dairy solutions using 2% milk and suggested flavors from Costello and Clark (2009)22 as well as other flavors. Prepare the following solutions: (1) unflavored milk (2% reduced fat milk); (2) unflavored water (drinking water); (3) vanilla extract flavor in milk (0.02 g/ml) (imitation clear vanilla flavor); and (4) salty flavor in milk (0.004 g/ml iodized salt).
      Note: These solutions are used for demonstration purposes only.
    2. Pour half ounce aliquots (~15 g) of each solution into 2 oz. transparent plastic sample cups and cap with color coded lids.
      Note: It is recommended to use transparent cups; however, it is up to the researcher's discretion.
  2. Recruit participants from the campus or the local community to participate in the study.
    Note: Participant sample size needed for a study is up to the discretion of the researcher. We recommend a range of 10 to 50 participants.
  3. Obtain human subject consent prior to participation in the study.
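The flavor concentrations in step 1.1 scale linearly to any batch size; a minimal Python sketch (the 1 L batch volume is an assumption for illustration, not part of the protocol):

```python
def flavor_mass_g(concentration_g_per_ml: float, batch_volume_ml: float) -> float:
    """Mass of flavoring (g) needed for a batch at a given concentration."""
    return concentration_g_per_ml * batch_volume_ml

# Concentrations from step 1.1; an assumed 1 L (1,000 ml) batch of 2% milk.
vanilla_g = flavor_mass_g(0.02, 1000)  # imitation clear vanilla flavor
salt_g = flavor_mass_g(0.004, 1000)    # iodized salt
```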

2. Preparation of Panel Room for Video Capture

Note: This protocol describes data capture in a sensory evaluation laboratory and is designed to make AFEA data capture practical in a sensory booth setting.

  1. Use individual booths with a touchscreen monitor positioned in front of the participant at face level to keep their focus forward and to prevent looking down.
  2. Use adjustable height chairs with back support.
    Note: These are essential for allowing participants to be vertically adjusted and placed in a suitable range for video capture. Use stationary chairs (no rolling feature) with adjustable back height support so the participant's movements are reduced.
  3. Set overhead lighting at "100% daylight" for optimal facial emotional video capture (Illuminant 6504K; R=206; G=242; B=255).
    Note: To avoid intense shadowing, diffuse frontal lighting is ideal while the light intensity or color is not as relevant20. Ultimately, it is up to the discretion of the researcher, individual protocol/methodology, and environment to control lighting for capture.
  4. Affix an adjustable camera above the touchscreen monitor for recording.
    1. Use a camera with a resolution of at least 640 x 480 pixels (or higher)20. Discuss the required camera capabilities with the software provider before purchase and installation20. Note: The aspect ratio is not important20.
    2. Set camera capture speed to 30 frames per second (or other standard speed) for consistency.
    3. Connect the camera to the media recording software and ensure the software is set up to record and save participant videos.

3. Participant Adjustment and Verbal Directions

  1. Have only one participant at a time evaluate the samples in the sensory booth.
    Note: Testing more than one participant at the same time may interfere with the testing environment and disrupt the concentration of the participant or create bias.
  2. Upon arrival, give participants verbal instructions about the process and standard operating procedures.
    1. Have the participants sit straight up and against the back of the chair.
    2. Adjust chair height, position of the chair (distance from the camera), and camera angle so that the participant's face is captured in the center of the video recording, with no shadows on chin or around eyes.
      Note: In the sensory booth, the participant's head is roughly 20 - 24 inches away from the camera and the monitor with the face centered in the camera video feed.
    3. Instruct participants to remain seated as positioned and focused facing towards the monitor display. Additionally, instruct participants to refrain from any sudden movements post-sample consumption during the 30 sec evaluation period per sample.
    4. Instruct the participant to consume the entire beverage or liquefied food sample and swallow.
    5. Instruct the participant to quickly move the sample cup below the chin and down to the table immediately after the sample is in the mouth. This is to eliminate facial occlusion. Remind them to keep looking toward the monitor.
      Note: The sample carrier to deliver the sample is up to the discretion of the researcher. A straw or cup may be used. Regardless, initial facial occlusion is unavoidable because the face will be occluded or distorted due to consumption.
  3. Instruct the participant to follow the instructions as they appear on the touchscreen monitor. Note: Instructions are automatically sequenced as programmed into the automated sensory software.

4. Individual Participant Process for Video Capture

  1. Confirm that the video camera is optimally capturing the participant's face while the participant is seated comfortably in the booth (before sample presentation) by viewing the computer monitor on which the video capture is displayed. Begin recording by clicking the record button on the computer monitor.
  2. Instruct participants to sip water to cleanse their palate.
  3. Provide treatments one at a time, starting with a baseline or control treatment (unflavored water). Identify each sample by a unique colored index card placed on top of each sample relating to the sample color code for sample treatment identification within the video.
    Note: Programmed guidance on the touchscreen monitor instructs participants. The instructions direct the participant through a series of standardized steps for each treatment sample.
  4. Via the touchscreen monitor, direct the participant to:
    1. Hold up the associated color index card pre-consumption for sample identification in the video.
      Note: The color card is a way researchers can identify treatments in the video and mark the appropriate time frame (time zero) for sample evaluation.
    2. After holding the card briefly, place the card back on the tray.
    3. Fully consume the sample and wait approximately 30 seconds, enforced through the programmed guidance on the monitor, while facing towards the camera.
      Note: The 30 sec controlled sampling period encompasses a time span adequate for the entire sampling evaluation period (i.e., showing the index card, opening a sample (removing the lid), consumption, and emotional capture).
    4. Enter their hedonic acceptability score on the touchscreen monitor (1=dislike extremely, 2=dislike very much, 3=dislike moderately, 4=dislike slightly, 5=neither like nor dislike, 6=like slightly, 7=like moderately, 8=like very much, 9=like extremely).
    5. Rinse mouth with drinking water before the next sample process.

5. Evaluating Automated Facial Expression Analysis Options

Note: Many facial expression analysis software programs exist. Software commands and functions may vary. It is important to follow the manufacturer's user guidelines and reference manual20.

  1. Save recordings in a media format and transfer to the automated facial expression analysis software.
  2. Analyze participant videos using automated facial analysis software.
    1. Double click on the software icon on the computer desktop.
    2. Once the program is open, click "File", select "New…", and select "Project…"
    3. In the pop up window, name the project and save the project.
    4. Add participants to the project by clicking the "Add participants" icon (Person with a (+) sign). More participants can be added by repeating this step.
    5. Add participant's video to the respective participant for analysis.
      1. On the left side of the screen click the icon of the film reel with a plus (+) sign to add a video to analyze.
      2. Click the "magnifying glass" under the participant of interest to browse the video to add.
  3. Analyze videos frame-by-frame under continuous calibration analysis settings in the software.
    1. Click the pencil icon to adjust settings at the bottom of the window, under the "settings" tab for each participant video.
      1. Set "Face Model" to General. Set "Smoothen classifications" to Yes. Set "Sample Rate" to Every frame.
      2. Set "Image rotation" to No. Set "Continuous calibration" to Yes. Set "Selected calibration" to None.
    2. Save project settings.
    3. Press the batch analysis icon (a red and black target-like symbol) to analyze the project videos.
    4. Save the results once analysis is completed.
      Note: Other video settings exist in the software if researcher preference warrants another analysis method.
    5. Consider videos failures if serious facial occlusion or an inability to map the face persists during the specified post-consumption window (Figure 1). Additionally, if the model fails, the data will read "FIT_FAILED" or "FIND_FAILED" in the exported output files (Figure 2). This represents lost data, since the software cannot classify or analyze the participant's emotions.
      Note: AFEA translates facial muscle motion to neutral, happy, disgusted, sad, angry, surprised and scared on a scale from 0 (not expressed) to 1 (fully expressed) for each emotion.
  4. Export the AFEA data output as log files (.txt) for further analysis.
    1. Once analyses are complete, export the whole project.
      1. Click "File", "Export", "Export Project Results".
      2. When a window opens, choose the location of where the exports should be saved and save the log files (.txt) to a folder.
      3. Convert each participant log file to a data spreadsheet (.csv or .xlsx) to extract relevant data.
        1. Open data spreadsheet software and select the "Data" tab.
        2. On the "Data" tab, in the "Get External Data" group, click "From Text".
        3. In the address bar, locate and double-click the participant text file to import, and follow the on-screen wizard instructions.
        4. Repeat the import process for all relevant participant files.
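The log-file conversion in step 5.4 can also be scripted. A minimal Python sketch, assuming the exported log files are tab-delimited text (verify the delimiter and any header rows against your software version's export format):

```python
import csv

def convert_log_to_csv(txt_path: str, csv_path: str, delimiter: str = "\t") -> None:
    """Convert one exported participant log (.txt) to .csv.

    The tab delimiter is an assumption; adjust it (and any header
    handling) to match the actual AFEA export layout.
    """
    with open(txt_path, newline="", encoding="utf-8") as src, \
         open(csv_path, "w", newline="", encoding="utf-8") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src, delimiter=delimiter):
            writer.writerow(row)
```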

6. Timestamp Participant Videos for Data Analysis

  1. Using the AFEA software, manually review each participant's video and identify post-consumption time zero for each sample. Record the timestamp in a data spreadsheet. Post-consumption time zero is defined as the point when the sample cup is below the participant's chin and no longer occludes the face.
    Note: The placement of the timestamp is critical for evaluation. The point where the cup no longer occludes the face is the optimal recommendation and timestamps need to be consistent for all participants.
  2. Save the timestamp data spreadsheet (.csv) as a reference for extracting relevant data from videos.
    Note: Participant videos may also be coded internally in the software as "Event Marking".
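Timestamps recorded as video times can be converted to seconds for the later alignment step; a minimal Python sketch (the "MM:SS.s" format and example values follow the note in section 7, while the dictionary layout is hypothetical):

```python
def timestamp_to_seconds(mmss: str) -> float:
    """Convert a video timestamp like '02:13.5' (MM:SS.s) to seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + float(seconds)

# Hypothetical timestamp reference: (participant, treatment) -> time zero (s).
time_zero = {
    (1, "unflavored water"): timestamp_to_seconds("02:13.5"),
    (1, "unflavored milk"): timestamp_to_seconds("03:15.4"),
}
```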

7. Time Series Emotional Analysis

Note: Consider the "baseline" to be the control (i.e., unflavored water in this example). The researcher has the ability to create a different "baseline treatment stimulus" or a "baseline time without stimulus" for paired comparison dependent on the interests of the investigation. The method proposed accounts for a "default" state by using a paired statistical test. In other words, the procedure uses statistical blocking (i.e., a paired test) to adjust for the default appearance of each participant and therefore reduces the variability across participants.

  1. Extract relevant data from the exported files (.csv or .xlsx).
    1. Identify a time frame relevant to the study evaluation (seconds).
    2. Manually extract respective data (time frame) from the exported participant files consulting the participant timestamp (time zero).
    3. Compile each participant's treatment data (participant number, treatment, original video time, and emotion response) per emotion (happy, neutral, sad, angry, surprised, scared, and disgusted) for the select time frame (seconds) in a new data spreadsheet for future analysis (Figure 3).
    4. Continue this process for all participants.
  2. Identify the corresponding time zero from the timestamp file for each participant-treatment pair and adjust video time to a true time "0" for direct comparison (Figure 4, Figure 5).
    Note: Participant data is collected in a continuous video therefore each treatment "time zero" is different (i.e., unflavored water video time zero is 02:13.5 and unflavored milk video time zero is 03:15.4) in Figure 4. Due to the different treatment "time zeroes", the video times need to be readjusted and realigned to start at "0:00.0" or other standard start time in order for direct time comparison of treatment emotional response data.
  3. For each participant, emotion, and adjusted time point, extract the paired treatment (e.g., unflavored milk) and control treatment (e.g., unflavored water) quantitative emotional score. In other words, align a participant's treatment and control time series of responses for each emotion (Figure 5).
  4. Compile all participants' information (participant, adjusted time, and paired treatment values, e.g., unflavored water and unflavored milk) at each time point (Figure 6).
    Note: The steps below demonstrate a paired Wilcoxon test by hand. Most data analysis software programs will do this automatically. It is recommended to discuss the statistical analysis process with a statistician.
  5. Once the samples are reset and aligned with new adjusted video times, directly compare the emotional results of a respective sample and the control (unflavored water) using sequential paired nonparametric Wilcoxon tests across the participants (Figure 7).
    Note: The new time alignment of the samples will allow for direct comparison within the 5 seconds post-consumption time frame. If a paired observation is not present in a treatment, drop the participant from that time point comparison.
    1. Calculate the difference between the control and the respective sample for each paired comparison using data spreadsheet management software.
      Note: The comparison will be dependent on the frame rate selected for emotional analysis in the software. The protocol demonstrates 30 individual comparisons per second for 5 seconds (selected time frame).
      Note: Use Figure 7 as a reference for columns and steps.
      1. Subtract the value of the treatment (e.g., unflavored milk) from the value of the control (e.g., unflavored water) to determine the difference. In the data spreadsheet management software, in a new column titled "Treatment Difference", enter "=(C2)-(D2)", where "C2" is the control emotional value and "D2" is the selected treatment emotional value. Continue this process for all time points.
      2. Calculate the absolute value of the treatment difference. In the data spreadsheet management software in a new column, enter "=ABS(E2)", where "E2" is the Treatment Difference. Continue this process for all time points.
      3. Determine the rank order of the treatment difference. In the data spreadsheet management software in a new column, enter "=RANK(G2, $G$2:$G$25, 1)" where "G2" is the Absolute Difference and "1" is "ascending". Continue this process for all time points.
      4. Determine the signed rank of the rank order on the spreadsheet. Change the sign to negative if the treatment difference was negative (Column I).
      5. Calculate the positive sum, "=SUMIF(I2:I25, ">0", I2:I25)", and the negative sum, "=SUMIF(I2:I25, "<0", I2:I25)", of the signed rank values.
      6. Determine the test statistic: the absolute value of the smaller of the two rank sums.
      7. Consult statistical tables for the Wilcoxon Signed-Rank Test statistic, using the number of observations included at the specific time and a selected alpha value, to determine the critical value.
      8. If the test statistic is less than the critical value, reject the null hypothesis. If it is greater than or equal to the critical value, fail to reject the null hypothesis.
  6. Graph the results on the associated treatment graph (i.e., unflavored milk compared to unflavored water) for the times when the null hypothesis is rejected. Use the sign of the difference to determine which treatment has the greater emotion (Figure 8).
    1. In the data spreadsheet management software, create a graph using the values of presence or absence of significance.
      1. Click "Insert" tab.
      2. Select "Line"
      3. Right click on the graph box.
      4. Click "select data" and follow the screen prompts to select and graph relevant data (Figure 8).
        Note: The graphs portray emotional results at times where the difference between the sample and the control is significant, and the sign of the difference indicates which is higher. This allows the researcher to discern how participants' emotions evolve over the 5 sec post-consumption period between two samples.
        Note: Support from a statistician is highly recommended for extracting relevant data. Development of statistical code is required to analyze emotional results.
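The by-hand procedure in steps 7.5.1-7.5.6 (difference, absolute value, ranks, signed ranks, rank sums, test statistic) can be sketched in Python. This is a minimal illustration using average ranks for tied absolute differences, not a replacement for statistical software or consultation with a statistician:

```python
def wilcoxon_statistic(control, treatment):
    """Paired Wilcoxon signed-rank test statistic, per steps 7.5.1-7.5.6.

    control, treatment: paired emotion scores across participants at one
    adjusted time point. Returns the smaller of the two signed-rank sums.
    """
    # 7.5.1: control minus treatment; zero differences (ties) are dropped.
    diffs = [c - t for c, t in zip(control, treatment) if c != t]
    # 7.5.2-7.5.3: rank the absolute differences (average ranks for ties).
    abs_sorted = sorted(abs(d) for d in diffs)

    def avg_rank(x):
        lo = abs_sorted.index(x) + 1       # first 1-based position of x
        hi = lo + abs_sorted.count(x) - 1  # last 1-based position of x
        return (lo + hi) / 2

    # 7.5.4: attach the sign of each difference to its rank.
    signed = [avg_rank(abs(d)) * (1 if d > 0 else -1) for d in diffs]
    # 7.5.5-7.5.6: the test statistic is the smaller rank sum.
    w_plus = sum(r for r in signed if r > 0)
    w_minus = -sum(r for r in signed if r < 0)
    return min(w_plus, w_minus)
```

Compare the returned statistic against the critical value from a Wilcoxon signed-rank table for the number of non-tied pairs at the chosen alpha (step 7.5.7); statistical packages such as R (listed in the Materials) perform this test, including the p-value, automatically.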

Results

The method proposes a standard protocol for AFEA data collection. If the suggested protocol steps are followed, unusable emotional data output (Figure 1) resulting from poor data collection (Figure 2: A; Left Picture) may be limited. Time series analysis cannot be utilized if the log files (.txt) predominantly contain "FIT_FAILED" and "FIND_FAILED" entries, as this indicates unusable data (Figure 1). Furthermore, the method includes a protocol for dir...

Discussion

Application of AFEA in the food- and beverage-related literature is very limited1-11. Its application to food is new, creating an opportunity for establishing methodology and data interpretation. Arnade (2013)7 found high variability in individual emotional responses to chocolate milk and white milk using area under the curve analysis and analysis of variance. However, even with participant variability, participants generated a happy response longer while sad and disgusted had shorter time r...

Disclosures

The authors have nothing to disclose.

Acknowledgements

This project was funded, in part, by ConAgra Foods (Omaha, NE, USA), the Virginia Agricultural Experiment Station, the Hatch Program of the National Institute of Food and Agriculture, U.S. Department of Agriculture, and the Virginia Tech Water INTERface Interdisciplinary Graduate Education Program.

Materials

Name | Company | Catalog Number | Comments
2% Reduced Fat Milk | Kroger Brand, Cincinnati, OH or DZA Brands, LLC, Salisbury, NC | na | for solutions
Drinking Water | Kroger Brand, Cincinnati, OH | na | for solutions
Imitation Clear Vanilla Flavor | Kroger Brand, Cincinnati, OH | na | for solutions
Iodized Salt | Kroger Brand, Cincinnati, OH | na | for solutions
FaceReader 6 | Noldus Information Technology, Wageningen, The Netherlands | na | For Facial Analysis
Sensory Information Management System (SIMS) 2000 | Sensory Computer Systems, Berkeley Heights, NJ | Version 6 | For Sensory Data Capture
Rhapsody | Acuity Brands Lighting, Inc., Conyers, GA | | For Environment Illumination
R | R Core Team (2015) | Version 3.1.1 | For Statistical Analysis
Microsoft Office | Microsoft | na | For Statistical Analysis
JMP | Statistical Analysis Software (SAS), SAS Institute, Cary, NC | Version 9.2 | For Statistical Analysis
Media Recorder 2.5 | Noldus Information Technology, Wageningen, The Netherlands | na | For capturing participants' sensory evaluation
Axis M1054 Camera | Axis Communications, Lund, Sweden | na |
Beverage | na | | Beverage or soft food for evaluation

References

  1. De Wijk, R. A., Kooijman, V., Verhoeven, R. H. G., Holthuysen, N. T. E., De Graaf, C. Autonomic nervous system responses on and facial expressions to the sight, smell, and taste of liked and disliked foods. Food Qual Prefer. 26 (2), 196-203 (2012).
  2. De Wijk, R. A., He, W., Mensink, M. G. J., Verhoeven, R. H. G., De Graaf, C. ANS responses and facial expression differentiate between the taste of commercial breakfast drinks. PLoS ONE. 9 (4), 1-9 (2014).
  3. He, W., Boesveldt, S., De Graaf, C., De Wijk, R. A. Behavioural and physiological responses to two food odours. Appetite. 59 (2), 628 (2012).
  4. He, W., Boesveldt, S., De Graaf, C., De Wijk, R. A. Dynamics of autonomic nervous system responses and facial expressions to odors. Front Psychol. 5 (110), 1-8 (2014).
  5. Danner, L., Sidorkina, L., Joechl, M., Duerrschmid, K. Make a face! Implicit and explicit measurement of facial expressions elicited by orange juices using face reading technology. Food Qual Prefer. 32 (2014), 167-172 (2013).
  6. Danner, L., Haindl, S., Joechl, M., Duerrschmid, K. Facial expression and autonomous nervous system responses elicited by tasting different juices. Food Res Int. 64 (2014), 81-90 (2014).
  7. Arnade, E. A. Measuring consumer emotional response to tastes and foods through facial expression analysis [thesis]. 1-187 (2013).
  8. Leitch, K. A., Duncan, S. E., O'Keefe, S., Rudd, R., Gallagher, D. L. Characterizing consumer emotional response to sweeteners using an emotion terminology questionnaire and facial expression analysis. Food Res Int. 76, 283-292 (2015).
  9. Crist, C. A., et al. Application of emotional facial analysis technology to consumer acceptability using a basic tastes model. (2014).
  10. Garcia-Burgos, D., Zamora, M. C. Facial affective reactions to bitter-tasting foods and body mass index in adults. Appetite. 71 (2013), 178-186 (2013).
  11. Garcia-Burgos, D., Zamora, M. C. Exploring the hedonic and incentive properties in preferences for bitter foods via self-reports, facial expressions and instrumental behaviours. Food Qual Prefer. 39 (2015), 73-81 (2015).
  12. Lewinski, P., Fransen, M. L., Tan, E. S. H. Predicting advertising effectiveness by facial expressions in response to amusing persuasive stimuli. J. Neurosci. Psychol. Econ. 7 (1), 1-14 (2014).
  13. Ekman, P., Friesen, W. V. Facial action coding system: A technique for the measurement of facial movement. (1978).
  14. Viola, P., Jones, M. Rapid object detection using a boosted cascade of simple features. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern. 1, 511-518 (2001).
  15. Sung, K. K., Poggio, T. Example-based learning for view-based human face detection. IEEE Trans. Pattern Anal. Mach. Intell. 20 (1), 39-51 (1998).
  16. FaceReader 5™ Technical Specifications. (2014).
  17. Cootes, T., Taylor, C. Statistical models of appearance for computer vision: Technical report. (2000).
  18. Bishop, C. M. Neural networks for pattern recognition. (1995).
  19. Lewinski, P., den Uyl, T. M., Butler, C. Automated facial coding: validation of basic emotions and FACS AUs in FaceReader. J. Neurosci. Psychol. Econ. 7 (4), 227-236 (2014).
  20. Noldus Information Technology. FaceReader Reference Manual Version 6. (2014).
  21. Alves, N. T., Fukusima, S. S., Aznar-Casanova, J. A. Models of brain asymmetry in emotional processing. Psychol Neurosci. 1 (1), 63-66 (2008).
  22. Costello, M., Clark, S. Preparation of samples for instructing students and staff in dairy products evaluation (Appendix F). In: Clark, S., Costello, M., Drake, M., Bodyfelt, F. (eds.) The sensory evaluation of dairy foods. 551-560 (2009).
  23. Porcherot, C., et al. How do you feel when you smell this? Optimization of a verbal measurement of odor-elicited emotions. Food Qual Prefer. 21, 938-947 (2010).
  24. Warrenburg, S. Effects of fragrance on emotions: Moods and physiology. Chem. Senses. 30, i248-i249 (2005).
  25. Bredie, W. L. P., Tan, H. S. G., Wendin, K. A comparative study on facially expressed emotions in response to basic tastes. Chem. Percept. 7 (1), 1-9 (2014).
  26. Wendin, K., Allesen-Holm, B. H., Bredie, L. P. Do facial reactions add new dimensions to measuring sensory responses to basic tastes?. Food Qual Prefer. 22, 346-354 (2011).
  27. Rosenstein, D., Oster, H. Differential facial responses to four basic tastes in newborns. Child Dev. 59 (6), 1555-1568 (1988).
  28. Rosenstein, D., Oster, H. Differential facial responses to four basic tastes in newborns. In: Ekman, P., Rosenberg, E. (eds.) What the face reveals: Basic and applied studies of spontaneous expression using the facial action coding system (FACS). 302-327 (1997).
  29. Rozin, P., Fallon, A. E. A perspective on disgust. Psychol. Rev. 94 (1), 23-41 (1987).
  30. Delarue, J., Blumenthal, D. Temporal aspects of consumer preferences. Curr. Opin. Food Sci. 3, 41-46 (2015).
  31. Sudre, J., Pineau, N., Loret, C., Marin, N. Comparison of methods to monitor liking of food during consumption. Food Qual Prefer. 24 (1), 179-189 (2012).
  32. Weiland, R., Ellgring, H., Macht, M. Gustofacial and olfactofacial responses in human adults. Chem. Senses. 35 (9), 841-853 (2010).
  33. Ekman, P., Cole, J. Universal and cultural differences in facial expressions of emotion. Nebraska symposium on motivation. 207-283 (1971).
  34. Griemel, E., Macht, M., Krumhuber, E., Ellgring, H. Facial and affective reactions to tastes and their modulation by sadness and joy. Physiol Behav. 89 (2), 261-269 (2006).
