Social Engineering Blogs

An Aggregator for Blogs About Social Engineering and Related Fields

The Humintell Blog April 4, 2017

How Our Brains Read People

For centuries, philosophers and scientists have struggled to discover the mysterious origins of human language.

A growing body of psychological research may have arrived at a neurological answer to this question, finding that language is closely intertwined with our ability to empathize with and understand other people. Recent scholarship suggests not only that language and empathy have shared roots, but that these roots are embedded in specific neurons in the brain: mirror neurons.

Mirror neurons are specialized brain cells that fire both when we perform an action and when we watch someone else perform it. For example, when we see another person fall and hurt themselves, our mirror neurons trigger the part of our own brain that would be activated if we had fallen ourselves.

These brain cells were first discovered in macaque monkeys in the 1980s by Dr. Giacomo Rizzolatti at the University of Parma, Italy. After hooking electrodes up to the monkeys’ brains, Dr. Rizzolatti found that when one monkey watched another grasp a peanut, some of the same neurons fired in both subjects’ brains.

Later research found similar brain cells in humans, and Dr. Rizzolatti began connecting mirror neurons with our ability to understand other people’s emotions and feel empathy for them. In fact, some studies have found that people living with autism, which is characterized by a lessened capacity for understanding other people’s emotions, have impaired mirror neuron structures.

But what about language? Ever since the discovery of mirror neurons, scientists like Dr. Rizzolatti have investigated their connection with the development of language. They found that the areas of the brain associated with speech were also necessary for understanding other people’s physical actions.

More recently, Dr. Michael Corballis, a psychologist at the University of Auckland, New Zealand, published his ambitiously titled The Truth about Language, arguing that language emerged from our instinctive desire to gesture at external objects.

His argument is that when primates gesture at the world around them, they are inherently communicating with their fellows, directing their companions’ attention toward a given object of interest. This ties naturally into the way our brains instinctively mirror the actions of others, allowing these gestures to communicate at the neurological level.

This argument does not diminish the incredible complexity of language; instead, it underscores the notion that communication is inherently interpersonal and deeply rooted in our brains. In fact, some mirror neuron experts argue not only that these cells are deeply tied to language, but that they are behind many other extraordinary human abilities.

For example, Dr. Vilayanur Ramachandran, of the University of California, San Diego, credits mirror neurons with the explosion of human culture around 50,000 years ago, known as the “great leap forward,” arguing that they enabled collective action and cooperation on a large scale.

While many psychologists are excited about the promising field of mirror neuron study, it is also important to note that there are many skeptics. Dr. Christian Jarrett, who writes extensively on psychological issues, called mirror neurons “the most hyped concept in neuroscience” in a 2012 article.

Dr. Jarrett contends that this line of investigation remains highly controversial, and he disputes the idea that mirror neurons inspired language, empathy, or culture. Instead, he argues that mirror neurons develop through experience: our brains evolved them alongside language and culture, rather than the neurons bringing those capacities into existence in the first place.

For more information on language and empathy, see our past blogs here and here.

Filed Under: Emotion, Nonverbal Behavior, Science

The Humintell Blog March 30, 2017

Emotion Recognition and Da Vinci

Is the Mona Lisa smiling?

There seems to be little doubt that we can learn a great deal through artwork, and a recent study of Leonardo Da Vinci’s famous Mona Lisa can help shed light on how we recognize facial expressions, as well.

This painting has long intrigued viewers, presenting an ambiguous facial expression that looks like a smile, despite a slight downturn in the mouth. A group of researchers at the University of Freiburg’s Medical Center sought to explore this issue, examining what the average person sees when they examine Da Vinci’s masterpiece.

In almost every case, test subjects perceived an expression of happiness in the stately portrait. While many of us certainly do see the Mona Lisa as happy, what is truly surprising is that this conclusion was shared by nearly every single person studied; such consensus is rare in scientific research.

In addition, the researchers explored some nuances of emotion recognition by presenting subjects with edited versions of the portrait whose expressions emphasized either happiness or sadness. They digitally altered the Mona Lisa’s mouth to craft four versions with progressively more pronounced smiles and four with progressively sadder expressions.

The experiment proceeded in two parts. The first simply involved showing a series of test subjects a copy of the Mona Lisa and the eight digitally edited versions in random order. Subjects pressed buttons to indicate whether each image was happy or sad and reported their level of confidence in that judgment.

The results were striking. Not only were the original and the happier versions invariably identified as happy, but participants also judged these happy visages more effectively than the sad varieties: they were both more confident that what they saw was happiness and quicker to make those determinations than they were for the sad variations.

Emanuela Liaci, a PhD student and first author of the report, explained this result, saying: “It appears as if our brain is biased to positive facial expressions.”

While art critics have historically been divided on whether the Mona Lisa is smiling, a 2015 analysis of Da Vinci’s work found that he had employed a similarly ambiguous expression in at least one other painting. In both cases, a close-up of the painting reveals an uncertain expression, but viewing it with a softer focus or from a greater distance emphasizes the smile.

This examination suggests that Da Vinci intended viewers to see a smile in the Mona Lisa’s face while also harboring doubt as to whether she is smiling. Could this intentional uncertainty be a reflection of the often ambiguous expressions that real people make? That would make sense, given the role many artists see for empathy in their work.

Humintell is certainly excited to see more of this sort of research!

For more information on reading ambiguous expressions, check out our work here and here!

Filed Under: Emotion

The Humintell Blog March 23, 2017

Can We Learn Empathy from Robots?

Many people familiar with science fiction harbor an ingrained fear of, and repulsion toward, what they see as cold, unfeeling robots.

The idea of widespread artificial intelligence brings to mind terrifying visions from films such as The Terminator or The Matrix, both of which present an apocalyptic future where artificial intelligence turns on mankind with disastrous results. The basic concern seems to be that robots lack any sense of empathy towards their human creators.

However, many humans already struggle with empathy, and the problem is especially acute in the field of medicine. Unfortunately, many patients struggle to effectively communicate their pain to doctors, the very people who are able to treat it. Granted, pain is a difficult thing to communicate, but there is some evidence that doctors are even worse at recognizing it than the general population.

This may be born of necessity, as medical professionals are required to distance themselves emotionally from patients in order to conduct treatment in a scientific and objective fashion. That distance, however, creates problems in trying to understand and diagnose pain conditions.

Dr. Laurel Riek, a professor of computer science at the University of California, San Diego, sought to test whether doctors could properly recognize emotional expressions in their patients. When medical experts and laypeople were shown digitally simulated facial expressions, the clinicians proved to be much less accurate at recognizing pain.

While the study analyzed various emotions, including anger and disgust, recognition of pain represented the starkest disparity between the groups. Only 54 percent of medical professionals successfully identified pain as opposed to an 83 percent success rate for laypeople.

The experiment simulated facial expressions not with images of actual humans but with computer-generated imagery and a physical robot. The robot, named Philip K. Dick, was created by analyzing a vast video archive of human expressions and using face-tracking software to graft those expressions onto its uncannily realistic rubber face.

Now, Dr. Riek is trying to use robots like Philip K. Dick to teach doctors how to better understand emotion. There is some precedent for this, as clinicians have often used robots as practice dummies for learning medicine.

But she has pointed out a major flaw in the use of these robotic training tools: “These robots can bleed, breathe, and react to medication… They are incredible, but there is a major design flaw – their face.” She explains that facial expressions are critical in communicating pain to doctors, not just in interacting with the patient but also in quickly diagnosing strokes or adverse reactions to medication.

This entire enterprise may strike many readers as highly ironic, given the cold, calculating image that science fiction has given us of artificial intelligence. Even the robot’s namesake was a prolific writer who dealt with the problem of robots’ lack of empathy. However, Dr. Riek’s work demonstrates how many varied applications such a powerful technology can have for better understanding emotions and facial expressions.

For more research on empathy and facial recognition, check out our past blogs here and here.

Filed Under: Emotion, Technology

