Poker Faces: What makes it hard to read other people’s emotions?

By Prof. Seana Coulson


Everyone knows smoking cigarettes is bad for you – but could it also be bad for your poker game?

Recent work from my lab suggests that having an object in your mouth can make it harder to read other people’s emotions. Emotion researchers have suggested that when people look at faces, they sometimes mirror their expressions in order to understand how the other person feels. This process is called simulation, and it is thought to involve the brain regions that control the experience of emotion. To explore the importance of these processes, we tested whether blocking people’s ability to simulate emotional faces makes it more difficult to understand them.

To zoom in on how hard it was to understand emotional faces, we used a technique known as event related brain potentials, or ERPs. Participants wore an electrode cap that recorded the electrical activity in their brain while they looked at emotional faces.
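To see what that recording yields, here is a minimal sketch (with simulated numbers, not our actual data or analysis code) of how an ERP is computed: the EEG is averaged across many trials, time-locked to the moment a face appears, so that random noise cancels out and the consistent brain response remains.

```python
import numpy as np

# Simulated illustration of ERP averaging; none of these numbers are real data.
sfreq = 250                                   # sampling rate in Hz
times = np.arange(-0.2, 0.8, 1 / sfreq)       # 200 ms before to 800 ms after face onset

rng = np.random.default_rng(0)
n_trials = 80
noise = rng.normal(0, 5, size=(n_trials, times.size))        # trial-by-trial noise
n400 = -4 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))  # negative bump near 400 ms
epochs = noise + n400                          # each trial = consistent signal + noise

erp = epochs.mean(axis=0)                      # averaging across trials gives the ERP
peak = np.argmin(erp)
print(f"peak amplitude {erp[peak]:.1f} µV at {times[peak] * 1000:.0f} ms")
```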

[Infographic: event-related brain potentials (ERPs)]

Over the years, cognitive neuroscientists have studied these ERP components and related them to different aspects of brain function. For example, the N400 is a negative peak in the brainwaves that is largest about 400 milliseconds after the presentation of a face. The N400 reflects brain activity associated with retrieving knowledge about the face. In the case of emotional faces, understanding a smiling face involves activating the information that the person is happy, while understanding a frowning face involves activating the information that the person is angry.

The reason we used the N400 in our study was that its size is related to how hard it is to understand a face: when the meaning of a face is hard to retrieve, the N400 is larger. For example, the N400 elicited by a face presented after a related face – say, a picture of Bill Clinton after a picture of Hillary Clinton – is smaller than the N400 elicited when the same face appears after an unrelated face – for example, after a picture of a professional athlete like Steph Curry.

Our hypothesis was that mirroring emotional faces helps us understand them. Knowing that the N400 is larger in conditions where it’s more difficult to derive the meaning of a face, we decided to compare the N400 under conditions that would make mirroring either more or less difficult.

Our next challenge was to decide how to vary the difficulty of mirroring. One way previous investigators have done this is to ask people to hold a pen in their mouth. When we tried this with a few pilot subjects, though, they complained that the pens were a bit too thick, and also that they tasted bad. While eating lunch at a Chinese restaurant on campus, my student Josh got the bright idea to try chopsticks. They were a bit thinner than the pens, and they tasted a lot better.

A bowl of ramen. By aungkarns.
You never know when a good idea might strike!

In our study, we compared the brain response to faces in two experimental conditions – that is, while people did two different things with their faces. In the interference condition, people held the chopsticks between their teeth. This caused a lot of activity in the facial muscles we use for certain kinds of emotional expressions, especially those, like smiling and disgust, that use the lower half of the face. However, because it’s rather distracting to have something in your mouth, we figured that alone might make understanding faces more difficult.

So rather than using a control condition where people looked at faces normally, we asked them to hold the chopsticks loosely between their lips. Although this was similar to the interference condition in that it was distracting, people’s facial muscles were more relaxed so that they could still simulate the faces they saw.

Figure from Davis, Winkielman, & Coulson (2017) showing the two different facial positions that experimental participants assumed

Participants in our study wore electrode caps and sat in a dark room with chopsticks in their mouths. They were asked to look at a series of faces: some were happy, some pleasantly surprised, some angry, and some disgusted. After each face, participants judged how good or bad the face was.

We looked at voltage changes to each of the different kinds of faces (happy, surprised, angry, and disgusted), and compared the size of the N400 in the interference condition versus the N400 during the control condition. For smiling faces, as predicted, the N400 was clearly larger during the interference condition. There was a similar finding for disgust faces, though not quite as strong. People had more trouble understanding these faces during the interference condition.
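For readers who like to see the numbers, here is a rough sketch of how such a comparison could be quantified: average the voltage in a window around 400 ms for each condition and compare the two means. The arrays, electrode, and time window below are illustrative assumptions, not our published analysis pipeline.

```python
import numpy as np

# Illustrative only: two sets of single-trial epochs (trials x time samples)
# from a hypothetical centro-parietal electrode, one per chopstick condition.
sfreq = 250
times = np.arange(-0.2, 0.8, 1 / sfreq)

rng = np.random.default_rng(1)
epochs_interference = rng.normal(-2.0, 1.0, size=(40, times.size))  # fake data
epochs_control = rng.normal(-1.0, 1.0, size=(40, times.size))       # fake data

def mean_n400(epochs, times, window=(0.3, 0.5)):
    """Mean voltage across trials in the 300-500 ms window after face onset."""
    mask = (times >= window[0]) & (times <= window[1])
    return epochs[:, mask].mean()

n400_interference = mean_n400(epochs_interference, times)
n400_control = mean_n400(epochs_control, times)

# A more negative mean in the interference condition would indicate a larger
# N400, i.e., more effort to retrieve the meaning of the face.
print(f"interference: {n400_interference:.2f} µV, control: {n400_control:.2f} µV")
```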

We had included the angry faces as a kind of control condition, because anger is mostly expressed in the muscles in the brow. Since it was still possible to simulate an angry face while holding a chopstick between the teeth, we predicted the N400 to angry faces would be the same in the interference condition and the control.

It was. This told us our two conditions were basically matched for how distracting they were.

One finding was somewhat unexpected – the surprise faces elicited similar-sized N400s in both conditions. The surprise faces showed models with their mouths slightly open, so we might have expected participants to recruit their own mouths to simulate them. From the outset, though, this condition was a bit of a wild card. We had included it because we wanted half of our faces to express a ‘good’ emotion (and these were the sort of face you might make if you found out your partner just bought you that puppy you’ve been wanting), and half a ‘bad’ one (like anger or disgust). However, previous investigators had tested surprise faces in a similar experiment and found that these sorts of facial interference paradigms didn’t impact behavioral measures of face processing.

Piotr Winkielman, the person on our team who knows the most about emotional processing, came up with the idea of running our stimuli (the pictures of faces our participants viewed) through an artificial intelligence system for recognizing faces. The system is called CERT, for Computer Expression Recognition Toolbox, and it takes an image of a face as input and outputs a list of codes that describe the muscle movements needed to make the face (see https://en.wikipedia.org/wiki/Facial_Action_Coding_System). This analysis suggested that the critical information in our surprise faces was actually in the upper part of the face – especially around the eyes – so participants could compensate for any difficulty mirroring the models’ open mouths by widening their own eyes.

Another possibility is that people are more likely to use simulation when a task is difficult, and judging that these pleasant surprise pictures were good rather than bad was so easy that people didn’t need to simulate. Josh is currently following up on this hypothesis in his dissertation research, with experiments where people judge the emotion in faces that are more or less expressive. If he’s right, then blocking simulation – with, say, a cigarette – might have no impact on your ability to read a very expressive person, but might make it more difficult to read someone with a good poker face.


Davis, J. D., Winkielman, P., & Coulson, S. (2017). Sensorimotor simulation and emotion processing: Impairing facial action increases semantic retrieval demands. Cognitive, Affective, & Behavioral Neuroscience, 17(3), 652-664. Reprint.

Featured Image: Poker Faces by Mattie B

Becoming a Research Assistant: The Staple of my College Career

By David Liau


Disclaimer: writing a blog post has always been my greatest weakness. I’ve always been good at verbal presentation; I’ve had no problems with standing up in front of a large crowd and conveying my point. But penned artifacts are a different matter… so please bear with me!


Many opportunities in college provide people with experience in either schoolwork, social life, or graduate school life. Some may even cover two of the areas, mixing in social life with graduate school life or schoolwork with graduate school life. But in my opinion, very rarely does an event encompass all three. I’ve had the privilege of finding one activity that miraculously does combine these three topics, and in convincing fashion – my position as a research assistant in the Language and Cognition Lab.

Prior to joining, I had never heard of this lab. On a whim, my friend Ya-han and I decided to attend a linguistics event, hoping to learn a little more about what we wanted (or didn’t want) to pursue as a career. What we didn’t know was that the supposed event was actually a lab recruitment meeting, and the presenters were graduate students.

Now I’m not going to say that this was the “fated moment” where I met my advisor and immediately knew I was going to join her. In all honesty, I was more focused on a different lab (I can’t exactly remember which), with a secondary interest in my current lab. But when I spoke to one of the graduate students, whose name was Rose, I found out that the lab was running experiments that needed Mandarin speakers to help out. Ya-han and I both spoke Mandarin, so we thought, “why not?” and signed up. Little did we know that joining Rose would lead to one of the most consistent parts of my college life.

The first job Ya-han and I completed upon starting our internship was arguably the hardest. Our research experiment essentially aims to discern differences between English and Mandarin speakers’ perceptions of time and space. These cultural differences manifest themselves in metaphors in both speech and writing. Whereas English is naturally read from left to right (as you are reading it now), Mandarin is (or at least used to be) read from top to bottom. Consequently, English speakers’ gestures commonly place the past to the speaker’s left and the future to the right. Meanwhile, Mandarin uses metaphors that place the past upward and the future downward (上個月 ‘last month’, literally ‘up month’; 下個月 ‘next month’, literally ‘down month’).


This seemingly minor difference between the two languages led to Rose’s research question: if the style of written English leads English speakers to develop a perception of the past as to the left and the future as to the right, does that mean that Mandarin speakers perceive up as the past and down as the future?

To test this, Rose created an experiment that would evaluate English and Mandarin speakers’ interpretations of time. Each participant would see two pictures sequentially, with the second picture showing an event that clearly occurred before or after the first picture. The participant would press one of two buttons – placed “up” and “down” on the keyboard – that matched the second picture’s chronological placement (before or after). This portion of the experiment is well known as the Orly task. Rose would then compare the reaction times of the Mandarin participants to those of the English participants, and determine which group responded faster.
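To give a flavor of what that comparison might look like in practice, here is a toy analysis sketch in pandas. The column names, numbers, and the simple t-test are all invented for illustration – this is not Rose’s actual design or analysis code.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 50  # fake trials per cell

# Hypothetical trial-level data; 'congruent' marks trials where the response
# key's position matches the language's dominant time metaphor.
trials = pd.DataFrame({
    "language": ["Mandarin"] * (2 * n) + ["English"] * (2 * n),
    "congruent": ([True] * n + [False] * n) * 2,
    "rt_ms": np.concatenate([
        rng.normal(620, 60, n),   # Mandarin, key mapping matches up/down metaphor
        rng.normal(680, 60, n),   # Mandarin, mismatching key mapping
        rng.normal(640, 60, n),   # English, congruent
        rng.normal(650, 60, n),   # English, incongruent
    ]),
})

# Mean reaction time per language and congruency condition
print(trials.groupby(["language", "congruent"])["rt_ms"].mean())

# Are Mandarin speakers faster when the key mapping matches the up/down metaphor?
mandarin = trials[trials.language == "Mandarin"]
t, p = stats.ttest_ind(mandarin.loc[mandarin.congruent, "rt_ms"],
                       mandarin.loc[~mandarin.congruent, "rt_ms"])
print(f"t = {t:.2f}, p = {p:.3f}")
```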

Ya-han and I oversaw the Mandarin version of the experiment. Because we wanted to keep the Mandarin speakers in a non-English environment, having two proctors who spoke to them only in Mandarin was the best way to run the experiment. To do this, Ya-han and I compiled a list of instructions and translated them into a Mandarin script, which we revised until it was adequate.

Running the experiments was extremely challenging at first. Just setting up the experiment required keeping track of a myriad of details – and that didn’t even include running it! But slowly, Ya-han and I got used to the process. Nowadays, I go in every week to run participants, looking forward to the point where we have enough data to analyze.


So, who are Rose and Ya-han you ask? Rose Hendricks is my graduate advisor, one of the best of the best. Not only does she excel in public speaking and linguistics research, but she’s also the loveliest person I know. I can tell that she’s truly passionate about her research on metaphors just through the everyday conversations that we have.

Ya-han is my research partner, someone who has been with me since day one. We met through a mutual friend, and I was surprised to learn that she was also a cognitive science major. From then on, we attended most Cognitive Science-related events together, from the start of our research position to presenting our findings with Rose at the Cognitive Science Conference this year. We’ve always bounced ideas off each other and used these opportunities to grow in our respective specialties.

My relationships with Rose and Ya-han deeply integrated themselves into my college career. My communication with them has been an important factor in getting through many hardships, not only in the lab but also in life. Through them and this position, I’ve gotten a slight taste of the research life.

Part of what I’ve learned about scientific research (especially in the graduate scene) is that it’s not nearly as easy as I imagined it to be. There are always factors that affect the scheduling and planning of experiments, ranging from proctors’ availability conflicts to participants’ unexcused absences. Then, just as a certain amount of data has been collected, there is always the impending fear of miscalculated statistics or minuscule details that render the data useless.


Not everything is bad, however. I’ve learned how to gather and analyze large amounts of data through the continuous trials of the experiment, as well as how to communicate properly with both fellow researchers and participants. Through training new lab assistants, I’ve become comfortable taking on a leadership role when needed, and more at ease conversing with strangers.

This lab that I’ve been part of for the past two years has both broadened my horizons and sharpened my interests. I’ve gained a wealth of knowledge working with Rose and Ya-han, and I look forward to more training in data analysis and statistical tests. The first thing I think of when people ask about my college experience is unequivocally the Language and Cognition Lab, and I hope it will stay that way for the last years of my college journey.

 

When do children start to understand words such as “yesterday” and “tomorrow”?

By Xirui He [Original research by Tillman, Marghetis, Barner, & Srinivasan, 2017]


How well do children (ages 3 to 8) understand deictic time words in terms of deictic status, sequential order, and remoteness?

To learn a language, children draw on two sources of information: the events that words refer to and the linguistic context in which the words appear. They make hypotheses about word meanings and check whether those hypotheses are correct. Deictic time words are more difficult for children to learn because these words are more abstract and require more encounters to understand. There are three facets of understanding deictic time words: deictic status, sequential order, and remoteness.

Deictic status means understanding “yesterday” as the past and “tomorrow” as the future. Sequential order means, for example, putting “10 days ago,” “3 days ago,” “yesterday,” “now,” and “tomorrow” in the right order. Remoteness is the relative distance a word refers to; for example, “10 days ago” is farther from now than “yesterday.” Children start to produce deictic time words from an early age, although they might not use them correctly the way adults do. How well do children ages 3 to 8 understand deictic time words and their deictic status, sequential order, and remoteness?

How was it addressed?

The researchers studied children’s mental representations of deictic time words by evaluating how children place these words (such as “yesterday” and “tomorrow”) on a left-to-right timeline. The participants were children from ages 3 to 8, with 16 children in each age group. The materials were colored pencils and sheets of paper with horizontal timelines printed on them. The researchers asked children to indicate where on the timeline an event associated with a specific time word belonged.

After collecting all the data, the researchers did a four-step analysis. First, they assessed the children’s understanding of deictic status, sequential order, and temporal remoteness. Next, they determined the typical ages of acquisition of facets of meaning. Then, they calculated contingencies between adult-like knowledge of these facets of meaning. Lastly, they studied the connection between the children’s performance on the timeline task and their ability to answer non-spatial, verbal forced choice questions about deictic time words.
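As a toy example of the first step, here is how a single child’s placements might be scored for deictic status and sequential order. The placement values and scoring rules are simplified illustrations, not the authors’ actual procedure.

```python
# Toy scoring of one child's timeline placements (positions in cm from the
# left edge of the printed line). Values are invented for illustration.
placements = {
    "3 days ago": 4.0,
    "yesterday": 5.5,
    "now": 7.0,
    "tomorrow": 9.0,
    "3 days from now": 8.0,   # misplaced: closer to 'now' than 'tomorrow'
}

# Deictic status: past words left of 'now', future words right of 'now'
past = ["3 days ago", "yesterday"]
future = ["tomorrow", "3 days from now"]
now = placements["now"]
deictic_correct = (all(placements[w] < now for w in past) and
                   all(placements[w] > now for w in future))

# Sequential order: do the placements follow the true temporal order?
true_order = ["3 days ago", "yesterday", "now", "tomorrow", "3 days from now"]
positions = [placements[w] for w in true_order]
order_correct = positions == sorted(positions)

print(deictic_correct, order_correct)   # True, False for this toy child
```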

What did the researchers find?

The researchers found that accuracy for the words’ deictic meaning increased with age; 7-year-olds performed much like adults. Knowledge of relative sequential order also increased with age, and 8-year-olds performed like adults. Knowledge of temporal remoteness (relative distance from “now”) improved gradually with age as well, but more slowly than deictic meaning and sequential order; here too, 8-year-olds performed like adults.


Looking at the graphs relating age to accuracy, the researchers found that knowledge of deictic status and sequential order are highly linked, but knowledge of remoteness is not. Children’s performance on the verbal questions showed that 7-year-olds’ abilities were indistinguishable from adults’. This suggests that the timeline task is a valid measure of semantic knowledge.

What does the finding mean? Why is it important?

These findings help us understand the developmental process by which deictic status, order, and remoteness emerge in children, and how children acquire this knowledge. The results show that temporal remoteness develops independently and much later. That children understand deictic status and order before remoteness indicates that they initially draw on linguistic context to constrain their early hypotheses about word meanings. This differs from the alternative hypothesis that children initially come to understand these words through event memories, making inferences about the meanings of deictic time words associated with those events.

Children start using deictic time words at an early age, and they have a partial understanding of these words even when they use them incorrectly. Still, there is a long delay between when children start using deictic time words and when they use them correctly.

 

Syringe: How much pain do you feel when you read the word syringe?

By Diana G De La Pena [original research by Reuter, Werning, Kuchinke, & Cosentino, 2016]


Has it ever happened to you that you see someone cut themselves or get hurt and you somehow ‘feel’ the pain they are going through?

One early Friday morning, I was reading the newspaper, and the front-page headline was about a woman who had slipped, cracked through the ice of a lake, and fallen into ice-cold water. Shortly after reading the story, out of nowhere, my body started feeling achy, shivery, and very cold – as if I had been the one who fell into the lake! I have never fallen into a lake of ice-cold water; I certainly avoid anything to do with cold weather. Can this be possible – my body reacting to simple words printed in a newspaper!?

Well, researchers recruited participants from Ruhr University Bochum to try to answer this question. Their aim was to determine whether individual differences in pain sensitivity influence the cognitive processing of words, as measured by people’s ratings of how pain-related a given word is. They were also interested in whether pain sensitivity plays a different role when processing abstract nouns versus concrete nouns.

Below, I briefly present the frameworks, the different types of words used, and the results discussed in the article, which together attempt to explain individual differences in language processing of pain-related and emotion-related stimuli.

Frameworks

  • Cognitive bias – individuals with specific inclinations demonstrate a cognitive bias toward stimuli closely related to their preferences.
  • Prototype Analysis – conceptual representations are encoded in a specific manner, and the distinctive features of these representations tend to be more central than other features.
  • For example, for an object with a sharp tip (a knife), our representation of the object is encoded so that that feature (the sharp tip) becomes the focus compared to the rest of the knife’s features.
  • Embodied Cognition – linguistic processing recruits our perceptual, motor, and emotional brain regions, which shapes each individual’s comprehension of words.

Design

  1. Results from 130 participants were used.
  2. Before the study began, participants were asked to self-assess their pain sensitivity (yes, not so much, definitely not, I do not know).
    They were also asked to report how frequently they experience pain (very rarely, now and then, quite often, chronic pain).
  3. The researchers assembled 600 German nouns, subdivided into the categories below.

Pain valence words

  • Nouns referring to objects that cause pain on contact (syringe, thorn, hail, hammer, crutch, tank, snake)
  • Nouns referring to body parts and bodily substances associated with pain (appendix, pus, neck, bone, scar)
  • Abstract nouns referring to states that involve pain (birth, emergency, epidemic, torture)

Positive valence words

  • Nouns (eagle, spring, sapphire)

Negative valence words

  • Non-pain-related nouns (wrinkle, race, spy)

Neutral valence words

  • (herring, magnifying glass, pendulum)
  4. Participants were presented with the words in random order and prompted to rate how strongly they associated each word with physical pain on a scale from 1 to 5 (1 = not at all, 2 = slightly, 3 = moderate, 4 = strongly, 5 = very strongly).

Results

Individuals who self-rated as more pain-sensitive showed stronger associations between pain and the pain-valence words than individuals who self-reported as less pain-sensitive.

Individual differences in pain sensitivity are associated with how strongly our pain matrix activates (areas such as the somatosensory cortex, anterior cingulate cortex, and prefrontal cortex that activate when we experience pain).

The researchers concluded that the third framework, embodied cognition, best explains the results of the study. In particular, the differences between processing abstract and concrete words are much better explained within an embodied cognition frame of reference.

Now, going back to my little experience from the beginning: I can try to explain my reaction to the story of the woman who fell into ice-cold water. For me, reading the words describing ice-cold water on the body came with a strong pain association – and I do rate myself as a very pain-sensitive individual.


In the end, we see that individual differences in pain sensitivity influence the cognitive processing of words, as measured via self-ratings of how pain-related a given word is.

Could the brain actually be responsible for teenagers “acting out”?

By Sarai Ballesteros [original research by Qu, Galvan, Fuligni, Lieberman, & Telzer, 2015]


Would you ever go to a highway with your friends in two separate cars and purposely drive towards each other at full speed until one of you swerves out of the way? Neither would I. However, there are teenagers who actually do this for fun. It’s a game widely known as “chicken.” It is risky behaviors such as these that have intrigued scientists for centuries. At last, scientists are finding reasons for these puzzling behaviors.

Yang Qu and colleagues searched for a biological basis for adolescents’ uniquely risky behaviors. In their paper entitled “Longitudinal Changes in Prefrontal Cortex Activation Underlie Declines in Adolescent Risk Taking” (2015), the researchers found evidence that specific brain regions that play major roles in decision-making are still developing in adolescents.

In this study, there were a total of 21 participants. All underwent testing at two different points in their lives, about one and a half years apart. The average age of the participants was about 16 years old during the first round of testing and about 17 years old at the second. Each participant completed several measures tracking the degree and frequency of their risky behaviors.

The first component of this study was a survey created in 1991 called the Youth Self-Report. This questionnaire asks participants how often they smoke, drink alcohol, use drugs, and steal. At the end, it gives the participant a score from 0 to 30, from least to most risky.

The second component was a task named the Balloon Analog Risk Task, or BART. In this task, participants were placed in a functional magnetic resonance imaging (fMRI) machine and given a relatively simple task. They were shown a computer screen with a red balloon and given the choice of either pumping the balloon with air or not. By pumping the balloon, they ran the risk of having it explode, in hopes of earning $0.25 with each successful pump. At any point, they could also choose the safe option: cash out and keep all of their winnings. However, if the balloon exploded before they cashed out, they lost everything. With each pump, the chance of the balloon exploding increased, thus increasing the risk. The test measured how many pumps each participant made before cashing out or popping the balloon.
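Here is a small sketch of the BART’s risk/reward logic. The payoff per pump matches the description above, but the explosion rule and the fixed cash-out policy are common illustrative choices, not necessarily the exact parameters used in this study.

```python
import random

def run_bart_balloon(max_pumps=32, payoff_per_pump=0.25):
    """Simulate one BART balloon: each pump risks an explosion but adds $0.25.

    The explosion rule here (one 'explosion point' drawn uniformly from the
    possible pumps) is a common BART implementation, not necessarily the exact
    one used by Qu and colleagues.
    """
    explosion_point = random.randint(1, max_pumps)  # pump on which it pops
    earnings = 0.0
    for pump in range(1, max_pumps + 1):
        # In the real task the participant chooses; here we follow a fixed
        # policy just to illustrate the risk/reward trade-off.
        if pump == explosion_point:
            return 0.0          # balloon exploded: lose everything
        earnings += payoff_per_pump
        if pump >= 10:          # a cautious 'cash out after 10 pumps' policy
            return earnings
    return earnings

# Average earnings over many simulated balloons for this 10-pump policy
trials = [run_bart_balloon() for _ in range(10_000)]
print(sum(trials) / len(trials))
```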


By monitoring participants in an fMRI machine while they completed this task, researchers were able to see the effects of risky decision-making on the brain. In whole-brain analyses, they saw increased activation in the ventrolateral prefrontal cortex (VLPFC) as well as in the medial prefrontal cortex (MPFC). These two areas are known to be related to the brain’s cognitive control and reward systems. Over time, the study found that activation in the VLPFC during risky decisions on the BART significantly decreased.

Overall, this study does not definitively show that immaturity in adolescents’ brains directly leads to risky behaviors. However, it is a good start to what will likely be a long road to understanding the teenage brain.

You are all creative because neuroscience says so

By Henri Skinner


What comes to mind when you hear the word creative?

Growing up, children learn to express themselves through fairytale stories, finger-paint crafts, and circle time dedicated to asking unusual questions. Everything they are told to be is NEW and ACCESSIBLE.

No child has ever doubted their ability to think new thoughts.

The sad reality is that as people grow older, they are less likely to define themselves as creative. This may be in part because the professions they fall into aren’t labelled as creative. Creativity becomes a mystical gift that only some lucky people have inherited, kept locked up in art museums and music halls. Whatever the cause, neuroscience argues that creativity is bigger than any career – that it is an integral part of our physiology, of ourselves!

A common way to describe creativity is the ability to produce work that is both novel and appropriate.

Neuroscientists use this definition in order to ask questions such as
“Is there such a thing as creative and non-creative people?”
“What about creative and non-creative tasks?”

PSA on creativity research

Let’s get real, folks. Creativity has a bit of a branding problem in the laboratory. The main problem is that testing for creativity usually involves comparing a pool of “creative people” against “non-creative people” as a control. These distinctions are usually decided by people’s career choices, which IS SO BAD. It implies that:

1. There are certain careers that do not necessitate creative thought
2. People are holistically defined by their careers

The research I will focus on instead uses “creative” and “non-creative” tasks as the basis for comparison. Cool? Cool. Here we go.

Background Information

The brain is split into four sections called “lobes” that each have their own purposes. The temporal, occipital, and parietal lobes are all key in memory and perception. They are the lobes that compile all kinds of sensory information: your sight, your experiences, your sense of hearing, and so on. They are basically the data source for your frontal lobe.

Your frontal lobe is basically the boss of higher cognitive thinking. It takes all the information from the other three lobes and integrates it into complex processing like emotional thought, decision-making, and creativity. Because of this unique job of the frontal lobe, the rest of the information I’ll be sharing will be about specific experiments on its activity.

Testing the Frontal Lobe

Sweet, so we have the tool! Now how do we use it?

Image from Thirteen of Clubs

Anna Abraham discusses how hard it can be to contextualize creativity and, drawing on conclusions by the neuroscientists Kröger and Rutter, created tests that highlight creativity’s definition of producing information that is both “unusual” and “appropriate.”


Their fMRI studies on “conceptual expansion” found that there were specific places in the brain that were most active when people completed creative tasks. These regions are the inferior frontal gyrus (IFG), the temporopolar cortex, and the frontopolar cortex (FPC). Neat! We found them! What is even more neat is that these areas of the brain are NOT exclusive to creative thought. In her paper, Abraham notes that “the lateral FPC is not specifically limited to semantic aspects of information processing” and that “both this brain region and the anterior IFG are sensitive to the degree of associative strength between concepts with greater brain activity elicited by wider semantic distance.” AKA, these creative processes amount to choosing and using the right kind of knowledge in our brain databases in order to create new ideas. The results of these different research avenues show that activity in these brain regions is not limited to the “artist” but is actually part of everyday thinking.

Conclusion

Abraham restates that there is no “qualitative distinctiveness” between creative and normative aspects of cognition in the brain. In other words, we cannot truly put creativity in a special box for special people; we should understand it as a mechanism integral to any profession, any person. So the next time someone tells you they simply aren’t creative, you can argue that if they have a functioning brain, they have every capacity to be creative.


Cited Sources

Abraham, Anna. “Creative Thinking as Orchestrated by Semantic Processing vs. Cognitive Control Brain Networks.” Frontiers in Human Neuroscience 8 (2014): 95. PMC. Web. 27 May 2017.

Kröger, S., Rutter, B., Stark, R., Windmann, S., Hermann, C., and Abraham, A. (2012). Using a shoe as a plant pot: neural correlates of passive conceptual expansion. Brain Res. 1430, 52–61. doi: 10.1016/j.brainres.2011.10.031

Rutter, B., Kröger, S., Stark, R., Schweckendiek, J., Windmann, S., Hermann, C., et al. (2012b). Can clouds dance? Neural correlates of passive conceptual expansion using a metaphor processing task: implications for creative cognition. Brain Cogn. 78, 114–122. doi: 10.1016/j.bandc.2011.11.002

Rutter, B., Kröger, S., Hill, H., Windmann, S., Hermann, C., and Abraham, A. (2012a). Can clouds dance? Part 2, An ERP investigation of passive conceptual expansion. Brain Cogn. 80, 301–310. doi: 10.1016/j.bandc.2012.08.003



Featured Image from DrOONeil

Sticks and Stones: Sensitivity to physical pain can make words “hurt” more

By Angie Wang [original research by Reuter, Werning, Kuchinke, & Cosentino, 2016]


Shard.

Pus.

Dental nerve.

If you’re like me, you probably winced as you read those words–or at least reading them made you somewhat uncomfortable. Or maybe you’re that one person who just loves words like these because of how they make other people shudder. Two kinds of people, they say.

But where does the difference lie? In a 2016 study, researchers found that people who are more sensitive to pain are more likely to associate words with greater amounts of pain than people who are less sensitive to pain.

Participants were asked “How strongly do you associate a [word] with pain?” which they rated on a 5-point scale. These words came from a list of 600 nouns, some associated with pain (e.g. knife, scar, epidemic), and others not pain-related (e.g. eagle, pendulum, wrinkle) to serve as a control. For the pain-related words, some were concrete nouns (e.g. hammer, snake) and others were abstract nouns that could refer to a state of affairs involving pain (e.g. emergency, birth). These words were presented in surveys.

After evaluating the words, participants self-reported their pain sensitivity. (Note: While the authors of the paper claim that self-assessment of pain sensitivity is valid, another study has shown otherwise). The participants were then divided into three groups: low, moderate, and high pain sensitivity. The researchers calculated the average pain-ratings for each word, and compared these values among the three groups. They found statistically significant differences between the high and moderate group, and between the high and low group. On average, people with high pain sensitivity rated the pain words like “thorn” as being more painful, while people with low pain sensitivity rated them as less painful. There was only a difference between groups for pain-related words like “fever,” not the non-pain words like “spring” which had served as a control. Finally, the study found a significant influence of pain sensitivity on word ratings for concrete words, but not abstract words.
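As a rough sketch of that kind of group comparison (with made-up ratings, not the study’s data): average each participant’s ratings of the pain-related words and test whether the self-reported high-sensitivity group differs from the others. The numbers and the simple t-tests below are illustrative assumptions, not the authors’ analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Fake per-participant mean ratings (1-5 scale) of the pain-related words,
# one value per participant in each self-reported sensitivity group.
low_sensitivity = rng.normal(2.8, 0.4, 40)
moderate_sensitivity = rng.normal(3.0, 0.4, 45)
high_sensitivity = rng.normal(3.3, 0.4, 45)

# Compare the high-sensitivity group against the low and moderate groups,
# echoing the paper's group contrasts (the real analysis may differ).
for name, group in [("low", low_sensitivity), ("moderate", moderate_sensitivity)]:
    t, p = stats.ttest_ind(high_sensitivity, group)
    print(f"high vs {name}: t = {t:.2f}, p = {p:.4f}")
```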


If individuals indeed differ in their processing of pain-related words, how do we explain these differences?

According to the embodied account, the processing of language involves visual, motor, and emotional areas of the brain corresponding to the contents being comprehended. One study has shown that when we process pain-related words, brain areas that are involved when we actually experience pain are activated. For example, as you understand the meaning of the word “needle,” visual and emotional areas of the brain involved in interactions with needles may be activated. Essentially, bodily experiences and multiple areas of the brain come together and affect the way that we comprehend (and subsequently rate) the “painfulness” of words.


The embodied account also explains the differences between the pain-ratings of concrete and abstract words. Concrete concepts are processed in brain regions that process action, perception, and emotion (in addition to language), whereas abstract concepts are primarily processed in brain areas devoted to language. We would expect there to be a relationship between pain sensitivity and the processing of concrete pain words because they involve the same brain regions, and indeed, there was a correlation.

Furthermore, according to the principle of body specificity, individual differences in the way people experience painful stimuli lead to differences in the way they construct concepts and word meanings associated with pain. This means that people differ in the degree to which they are “hurt by words,” based on their interactions with their physical environment.

In a basic sense: differences in pain sensitivity → differences in the activation of brain areas (when we experience pain and when we read pain words) → different processing of pain-related words.

So do we blame our brains for making us “feel” and judge words a certain way? Maybe. Or perhaps it simply boils down to our different imaginations. While it’s not surprising that we each have different levels of pain sensitivity, it is the relationship between this sensitivity and the processing of words that is intriguing. And while it’s not every day that we give 5-point ratings to pain-related words, these ratings do demonstrate how physical pain and language – two seemingly separate spheres – are brought together.


Featured Image: Disgust. By Christopher Brown. CC