New Developments in Emotion Research

Emotion recognition

By Isabelle Mareschal

Date and time

Fri, 20 Jan 2023 10:00 - 19:00 GMT

Location

Queen Mary University of London

Arts Theatre 2, Mile End Road, London E1 4NS, United Kingdom

About this event

This symposium is funded by the MRC and offers researchers in different areas an opportunity to present work on emotion research, covering new developments, methods, and questions.

Confirmed speakers on the day are:

Nicola Binetti, SISSA. EmoGen: a window into people’s understanding of emotional expressions through a response-driven expression-rendering pipeline

Roberto Caldara, University of Fribourg. Myths and limits of the recognition of facial expressions of emotion

Antoine Coutrot, University of Lyon. Facial emotion recognition tests should be country-dependent

Maria Gendron, Yale University. Emotional Event Perception is Related to Lexical Complexity and Emotion Knowledge

Connor Keating, University of Birmingham. It's written all over your face: new insights into why some people are better at reading emotions than others

Tom Murray, Queen Mary University of London. Genetic Algorithms Reveal Identity Independent Representation of Emotional Expressions

Lauri Nummenmaa, University of Turku. Mapping the whole-body emotion circuits with total-body positron emission tomography

Bridget Waller, Nottingham Trent University. Individual differences in facial expression production: Implications for theories of emotion

The symposium will be held in Arts Theatre 2 at Queen Mary University of London.

PROGRAM:

10:00 – 10:30 : Registration + Tea / Coffee

10:30 – 12:30 : Talk session 1

12:30 – 13:30 : Lunch

13:30 – 14:50 : Talk session 2

14:50 – 15:10 : Tea / Coffee

15:10 – 17:10 : Talk session 3

17:10 – 19:00 : Reception

TALK SESSION 1

10:30 – 11:10 Lauri Nummenmaa

Mapping the whole-body emotion circuits with total-body positron emission tomography

Emotions are intimately linked with the homeostatic control of the body. Although modern neuroimaging techniques allow quantification of neuronal changes associated with emotion, cognition, and numerous other processes, these are typically investigated independently of the physiological state of the body. Since the advent of functional magnetic resonance imaging, truly quantitative perfusion-based PET studies of human brain activity have fallen out of favour due to their complexity, low temporal precision, and associated radiation load. The major advantage of PET, however, is that the acquisition of the projection data is not limited by physical (e.g., gantry rotation in CT) or electronic (e.g., pulse sequences in MRI) scanning. Instead, the limits arise from the counting statistics of coincident photon detection, as in conventional PET scanners a large proportion of the photons do not pass through the limited-field detector ring. First, roughly 85%–90% of the body is always outside the FOV of the scanner, yielding no signal. Second, even for tissues and organs within the FOV, less than 5% of the gamma rays can be collected, as the radiation is emitted isotropically and most of it does not intercept the detector rings.
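
Taken together, these two loss factors multiply. As a minimal back-of-the-envelope sketch (illustrative arithmetic based on the figures above, not a number quoted in the abstract):

```latex
% Combined sensitivity of a conventional PET ring, multiplying the two
% loss factors given above (illustrative, order-of-magnitude only):
%   fraction of the body inside the FOV:        10--15%
%   fraction of emitted gamma rays intercepted: < 5%
\[
S_{\text{conventional}} \;\lesssim\; 0.15 \times 0.05 \;=\; 0.0075 ,
\]
% i.e. well under 1% of emitted photon pairs ever contribute signal,
% which is why extending the detector rings pays off so dramatically.
```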

Total-body PET resolves both these problems by extending the detector rings so that they cover practically the whole body. The axial field of view of the Siemens Quadra PET camera is 106 cm, about four times longer than that of current PET/CT scanners. It covers the whole body from head to approximately mid-thigh in a single scan, yielding an estimated 24-fold sensitivity increase for head-to-thigh scans and a 4-fold increase for single tissue/organ studies. With the advent of total-body PET, quantitative functional imaging of the brain and the body is again in its prime. With modern total-body PET scanners yielding ultrasensitive, sub-second total-body imaging, we can investigate the neurobiology and peripheral physiology of human mental functions simultaneously.

In this talk I present an overview of recent methodological developments in total-body PET in the context of affective neuroscience. I will also present results from our recent pharmacological imaging studies of the anxiety response in the central nervous system and peripheral organs, and discuss how total-body imaging will lead to a paradigm shift in our understanding of the biological circuits underlying human mental functions.

11:10 – 11:50 Nicola Binetti

EmoGen: a window into people’s understanding of emotional expressions through a response-driven expression-rendering pipeline

Successful emotional communication between group members hinges on a mutual understanding of how facial expressions map onto distinct emotion categories. Yet recent evidence shows that not all people process emotional expressions in the same way, revealing that this mapping is not universal. But how much do people deviate in their ideas of emotional expressions, and when do these differences hinder their ability to communicate emotional states successfully? We developed EmoGen as a means of quantifying this variability within a typical adult population, allowing people to evolve synthetic expressions portrayed by 3D faces that embody core emotion types. This method revealed substantial variability in expressions within each emotion, suggesting that people associate different facial expressions with the same emotional state. This variability in turn has measurable impacts on emotion recognition, with performance explained by how closely viewed expressions match one’s own internal template of a specific emotion category. This can lead people to misread emotions when, for instance, one person’s idea of fear is closer to another’s idea of anger, potentially leading to a breakdown in emotional communication. This method has important implications for emotion research, as it provides a quick, response-driven way of probing people’s unique ideas of how emotions should be expressed and relating these ideas to their interpretation of others’ expressions.

11:50 – 12:30 Maria Gendron

Emotional Event Perception is Related to Lexical Complexity and Emotion Knowledge

Emotion inferences are often studied by asking people to categorize isolated and static cues like frowning faces. Yet emotions are complex events that unfold over time. Here, across three samples (total N = 584), we develop the Emotion Segmentation Paradigm to examine inferences of complex emotional events by extending the cognitive science work on event perception. Participants were asked to indicate when there were changes in the emotions of target individuals within continuous streams of activity in narrative film (Study 1) and documentary clips (Study 2, pre-registered; Study 3, a test-retest sample). This Emotion Segmentation Paradigm revealed robust and reliable individual differences. We also tested the prediction that emotion labels constrain emotion inference, which has previously only been tested by introducing emotion labels. We demonstrated that individual differences in active emotion vocabulary (i.e., readily accessible emotion words) shape emotion segmentation performance. Our Emotion Segmentation Paradigm has the potential to advance how we operationalize and study sources of variation in emotion inference.

TALK SESSION 2

13:30 – 14:10 Tom Murray

Genetic Algorithms Reveal Identity Independent Representation of Emotional Expressions

People readily and automatically process facial emotion and identity, and it has been reported that these cues are processed both dependently and independently. However, the question of identity-independent encoding of emotions has only been examined using posed, often exaggerated, expressions of emotion that do not account for the substantial individual differences in emotion recognition. In this study, we ask whether people’s unique beliefs about how emotions should be reflected in facial expressions depend on the identity of the face. To do this, we employed a genetic algorithm in which participants created facial expressions to represent different emotions. Participants generated facial expressions of anger, fear, happiness, and sadness on two different identities. Facial features were controlled by manipulating a set of weights, allowing us to probe the exact positions of faces in high-dimensional expression space. We found that, for angry, fearful, and happy expressions, but not sad ones, participants created facial expressions for each identity in a similar region of space that was unique to the participant. However, using a machine learning algorithm that examined the positions of faces in expression space, we also found systematic differences between the two identities’ expressions across participants. This suggests that participants’ beliefs about how an emotion should be reflected in a facial expression are unique to them and identity-independent, although there are also some systematic differences in the facial expressions between the two identities that are common across all individuals.
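
To make the participant-driven genetic algorithm concrete, here is a minimal sketch of that class of procedure (not the authors’ code: the weight dimensionality, population size, and mutation settings are illustrative assumptions, and the human observer is simulated by distance to a hidden target vector):

```python
# A minimal sketch of a participant-driven genetic algorithm of the kind
# described above: each face is a vector of feature weights, the "participant"
# repeatedly picks the candidates that best match their internal template of
# an emotion, and the population evolves toward that template. Here the human
# choice is simulated by closeness to a hidden target vector (an assumption
# for illustration; in the study, a person judges rendered 3D faces).
import numpy as np

rng = np.random.default_rng(0)

N_WEIGHTS = 10       # dimensionality of the expression space (assumed)
POP_SIZE = 20        # candidate faces shown per generation (assumed)
N_GENERATIONS = 40

def simulated_participant(faces, template):
    """Stand-in for the human observer: score each candidate face by its
    closeness to the participant's internal template of the emotion."""
    return -np.linalg.norm(faces - template, axis=1)

def evolve(template):
    # start from random expressions
    pop = rng.uniform(-1, 1, size=(POP_SIZE, N_WEIGHTS))
    for _ in range(N_GENERATIONS):
        scores = simulated_participant(pop, template)
        parents = pop[np.argsort(scores)[-POP_SIZE // 2:]]    # selection
        pairs = rng.integers(len(parents), size=(POP_SIZE, 2))
        children = parents[pairs].mean(axis=1)                # crossover
        children += rng.normal(0, 0.05, children.shape)       # mutation
        pop = children
    return pop[np.argmax(simulated_participant(pop, template))]

hidden_template = rng.uniform(-1, 1, N_WEIGHTS)  # one person's "idea" of, e.g., anger
best = evolve(hidden_template)
print("distance to template:", np.linalg.norm(best - hidden_template))
```

In the actual experiment the selection step is performed by the participant viewing rendered faces, so the internal template is never observed directly; it is inferred from the expressions the procedure converges on.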

14:10 – 14:50 Connor Keating

It's written all over your face: new insights into why some people are better at reading emotions than others

Some people are exceptional at reading the emotions of other people, while others struggle. At present, little is known about which emotional abilities and processes feed into these individual differences in emotion recognition. Here we discuss novel insights garnered from recent studies on the traits, abilities, and processes implicated in the recognition of emotion from dynamic facial expressions. For example, we consider whether the way we feel “on the inside” influences the way we expect emotions to be expressed in the “outside world”, and subsequently our ability to read others’ emotional expressions. In doing so, we introduce novel methods for mapping the internal emotional landscape (EmoMap), visual representations of emotion (ExpressionMap), and dynamic productions of facial expressions (FacialMap). Combining the results from multiple studies, we construct the Inside Out Model of Emotion Recognition, which explains 61.2% of the variance in emotion recognition accuracy and demonstrates that the precision and differentiation of emotional experiences and visual emotion representations contribute to emotion recognition. The results of this work have critical implications for answering outstanding questions regarding the aetiology of emotion recognition difficulties in numerous clinical populations (e.g., autism spectrum disorder, depression, anxiety).

TALK SESSION 3

15:10 – 15:50 Bridget M. Waller

Individual differences in facial expression production: Implications for theories of emotion

Communicating with others via the face is crucial for navigating social interactions. We know surprisingly little about how individuals differ in this ability and whether such differences have an impact on individuals’ lives. In part, this could be due to a historical focus on the universal nature of facial expression, assigning individual differences to random ‘noise’. There are also limited data from real-life social interactions with which to test hypotheses. We are exploring whether individual differences in facial expressivity equip individuals differentially to engage with their social environment, resulting in differences in the size and quality of an individual’s social network. We combine psychological, anatomical, and cross-species methods in an interdisciplinary investigation of individual differences in facial expressivity.

15:50 – 16:30 Antoine Coutrot

Facial emotion recognition tests should be country-dependent

Measures of social cognition have now become central in neuropsychology, being essential for early and differential diagnoses, follow-up, and rehabilitation in a wide range of conditions. Most cognitive tasks have been developed in highly specific contexts, often by Western academics, which is likely to induce biases when comparing the social cognition of people across different cultures. To quantify this bias, we built a large network of neuropsychologists across 18 sites and assessed core aspects of social cognition in 587 participants from 12 countries. In particular, we used pictures from the facial affect test (Ekman & Friesen, 1976) to assess Facial Emotion Recognition, one of the most widely used tasks for examining social cognition in neuropsychiatric populations. We show that after controlling for age, gender, and education, more than 20% of the remaining variance in Facial Emotion Recognition scores can be attributed to differences between nationalities. We isolated participants’ nationality from potential translation issues, which classically constitute a major limitation. These findings highlight the need for important methodological shifts to better represent social cognition in both fundamental research and clinical practice, especially within emerging international networks and consortia.

16:30 – 17:10 Roberto Caldara

Myths and limits of the recognition of facial expressions of emotion
