Poker Faces: What makes it hard to read other people’s emotions?

By Prof. Seana Coulson

Everyone knows smoking cigarettes is bad for you – but could it also be bad for your poker game?

Recent work from my lab suggests that having an object in your mouth can make it harder to read other people’s emotions. Emotion researchers have suggested that when people look at faces, they sometimes mirror their expressions in order to understand how the other person feels. This process is called simulation, and it is thought to involve the brain regions that control the experience of emotion. To explore the importance of these processes, we tested whether blocking people’s ability to simulate emotional faces makes it more difficult to understand them.

To zoom in on how hard it was to understand emotional faces, we used a technique known as event-related brain potentials, or ERPs. Participants wore an electrode cap that recorded the electrical activity in their brain while they looked at emotional faces.
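As a rough illustration (simulated data, not our actual recordings), an ERP is obtained by cutting the continuous EEG into epochs time-locked to each face and averaging them, so that brain activity unrelated to the face cancels out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 40 trials, 1 second of EEG per trial at 500 Hz.
n_trials, srate = 40, 500
times = np.arange(srate) / srate * 1000  # milliseconds after face onset

# Simulate single-trial EEG: a negative deflection peaking near 400 ms
# (an "N400"-like response) buried in much larger background noise.
signal = -5.0 * np.exp(-((times - 400) ** 2) / (2 * 50 ** 2))  # microvolts
trials = signal + rng.normal(0, 5, size=(n_trials, srate))

# The ERP is the average across trials: noise that is random with
# respect to face onset averages toward zero, while the time-locked
# response survives.
erp = trials.mean(axis=0)

peak_ms = times[np.argmin(erp)]
print(f"ERP minimum at {peak_ms:.0f} ms")
```

With enough trials, the recovered peak lands close to the 400 ms latency built into the simulated signal, which is exactly why averaging over many face presentations is needed before a component like the N400 becomes visible.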


Over the years, cognitive neuroscientists have studied the characteristic peaks and dips in these recordings – known as ERP components – and been able to relate them to different aspects of brain function. For example, the N400 is a negative peak in the brainwaves that is largest about 400 milliseconds after the presentation of a face. The N400 reflects brain activity associated with retrieving knowledge about the face. In the case of emotional faces, understanding a smiling face involves activating the information that the person is happy, while understanding a frowning face involves activating the information that the person is angry.

The reason we used the N400 in our study was that its size is related to how hard it is to understand a face. When the meaning of a face is hard to retrieve, the N400 is larger. For example, the N400 elicited by a face presented after a related face – say, a picture of Bill Clinton after a picture of Hillary Clinton – is smaller than the N400 elicited by that same face after an unrelated one – say, a picture of a professional athlete like Steph Curry.

Our hypothesis was that mirroring emotional faces helps us understand them. Knowing that the N400 is larger in conditions where it’s more difficult to derive the meaning of a face, we decided to compare the N400 under conditions that would make mirroring either more or less difficult.

Our next challenge was to decide how to vary the difficulty of mirroring. One method previous investigators have used is to ask people to hold a pen in their mouth. When we tried this with a few pilot subjects, though, they complained that the pens were a bit too thick, and also that they tasted bad. While eating lunch at a Chinese restaurant on campus, my student Josh got the bright idea to try chopsticks. They were a bit thinner than the pens, and they tasted a lot better.

Ramen (image by aungkarns). You never know when a good idea might strike!

In our study, we compared the brain response to faces in two experimental conditions – that is, while people did two different kinds of things with their faces. In the interference condition, people held the chopsticks between their teeth. This caused a lot of activity in the facial muscles we use for certain kinds of emotional expressions, especially those like smiling and disgust that use the lower half of the face. However, because it’s rather distracting to have something in your mouth, we figured that alone might make understanding faces more difficult.

So rather than using a control condition where people looked at faces normally, we asked them to hold the chopsticks loosely between their lips. Although this was similar to the interference condition in that it was distracting, people’s facial muscles were more relaxed so that they could still simulate the faces they saw.

Figure from Davis, Winkielman, & Coulson (2017) showing the two different facial positions that experimental participants assumed

Participants in our study wore electrode caps and sat in a dark room with chopsticks in their mouths. They were asked to look at a series of faces: some were happy, some pleasantly surprised, some angry, and some disgusted. After each face, participants judged how good or bad the face was.

We looked at voltage changes to each of the different kinds of faces (happy, surprised, angry, and disgusted), and compared the size of the N400 in the interference condition versus the N400 during the control condition. For smiling faces, as predicted, the N400 was clearly larger during the interference condition. There was a similar finding for disgust faces, though not quite as strong. People had more trouble understanding these faces during the interference condition.
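To make that comparison concrete, here is a hedged sketch with made-up numbers (not our data) of how an N400 effect is commonly quantified: take the mean voltage in a window around 400 ms, then compare it across conditions.

```python
import numpy as np

srate = 500
times = np.arange(srate) / srate * 1000  # ms after face onset

def n400_amplitude(erp, lo=300, hi=500):
    """Mean voltage in the 300-500 ms window, a common N400 measure."""
    win = (times >= lo) & (times < hi)
    return erp[win].mean()

# Hypothetical grand-average ERPs (microvolts) for smiling faces.
rng = np.random.default_rng(1)
base = -4.0 * np.exp(-((times - 400) ** 2) / (2 * 60 ** 2))
erp_control = base + rng.normal(0, 0.3, srate)
erp_interfere = 1.5 * base + rng.normal(0, 0.3, srate)  # larger N400

diff = n400_amplitude(erp_interfere) - n400_amplitude(erp_control)
print(f"Interference minus control N400: {diff:.2f} uV")
# A more negative difference means a larger N400 under interference,
# i.e. retrieving the face's meaning was harder in that condition.
```

The window bounds and amplitudes here are illustrative; the point is simply that "a larger N400" cashes out as a more negative mean voltage in the measurement window.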

We had included the angry faces as a kind of control condition, because anger is mostly expressed in the muscles in the brow. Since it was still possible to simulate an angry face while holding a chopstick between the teeth, we predicted the N400 to angry faces would be the same in the interference condition and the control.

It was. This told us our two conditions were basically matched for how distracting they were.

One finding was somewhat unexpected – the surprise faces elicited a similar-sized N400 in both conditions. The surprise faces showed models with their mouths slightly open, so we might have expected participants to recruit their own mouths to simulate them. From the outset, though, this condition was a bit of a wild card. We had included it because we wanted half of our faces to express a ‘good’ emotion (and these were the sort of face you might make if you found out your partner had just bought you that puppy you’ve been wanting), and half a ‘bad’ one (like anger or disgust). However, previous investigators had tested surprise faces in a similar experiment and found that these sorts of facial interference paradigms didn’t impact behavioral measures of face processing.

Piotr Winkielman, the person on our team who knows the most about emotional processing, came up with the idea of running our stimuli (the pictures of faces our participants viewed) through an artificial intelligence system for recognizing faces. The system, called CERT (the Computer Expression Recognition Toolbox), takes an image of a face as input and outputs a list of codes that describe the muscle movements needed to make the face. This analysis suggested that the critical information in our surprise faces was actually in the upper part of the face – especially around the eyes – so that participants could compensate for the difficulty of mirroring the models’ open mouths by widening their own eyes.
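Those muscle-movement codes are the Action Units (AUs) of the Facial Action Coding System; prototypical surprise, for instance, combines brow raising (AU 1+2), upper-lid raising (AU 5), and a jaw drop (AU 26). As a toy illustration of the upper- versus lower-face point (the groupings and example faces below are my own simplification, not CERT output):

```python
# Standard FACS Action Units, split by facial region (simplified).
UPPER_FACE = {1, 2, 4, 5, 7}          # brows and eyelids
LOWER_FACE = {9, 10, 12, 15, 25, 26}  # nose, mouth, jaw

def upper_face_share(aus):
    """Fraction of a face's active Action Units in the upper face."""
    upper = len(set(aus) & UPPER_FACE)
    lower = len(set(aus) & LOWER_FACE)
    return upper / (upper + lower)

# Prototypical surprise: brow raise (1+2), lid raise (5), jaw drop (26).
surprise = [1, 2, 5, 26]
# A simple smile: lip corner pull (12) with lips parted (25).
smile = [12, 25]

print(upper_face_share(surprise))  # mostly upper-face information
print(upper_face_share(smile))     # entirely lower-face information
```

On this crude tally, blocking the lower face with chopsticks would leave most of the diagnostic information for surprise intact, while removing all of it for a smile – consistent with why interference hurt smiles but not surprise faces.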

Another possibility is that people are more likely to use simulation when a task is difficult, and judging that these pleasant surprise pictures were good rather than bad was so easy that people didn’t need to simulate. Josh is currently following up on this hypothesis in his dissertation research, in experiments where people judge the emotion in faces that are more or less expressive. If he’s right, then blocking simulation – with, say, a cigarette – might have no impact on your ability to read a very expressive person, but make it more difficult to read someone with a good poker face.

Davis, J. D., Winkielman, P., & Coulson, S. (2017). Sensorimotor simulation and emotion processing: Impairing facial action increases semantic retrieval demands. Cognitive, Affective, & Behavioral Neuroscience, 17(3), 652-664.

Featured Image: Poker Faces by Mattie B