Social Engineering Blogs

An Aggregator for Blogs About Social Engineering and Related Fields

The Humintell Blog November 1, 2017

Nonverbal Cues in the 21st Century

The more impersonal communication gets, the more we remember the need for personal contact.

While technology has many great features, it often distills communication down to text messages, emails, or instant messages. These tools can help manage spread-out workforces or enable people to work from home, but they also prevent us from reading each other’s nonverbal behavior. This does more than hamper effective communication; it can even prevent the development of trusting and empathetic relationships.

A 2012 study found measurably different neurological responses in the brain when comparing impersonal communication with face-to-face interaction. Moreover, the study’s authors concluded that the neurological effects unique to face-to-face dialogue may be crucial to successful interactions.

These neurological findings fit closely with the firsthand experiences of a variety of entrepreneurs. For instance, Max Brown, the founder of Silicon Beach Trust, emphasized the trust-building aspects of in-person interaction: “Overall the biggest value of face time is that it’s really the only legitimate way to build trust with someone.”

This notion of trust proved crucial to other testimonials. Anna Barber, the managing director for Techstars, stressed the need for trust to mediate possible interpersonal conflicts. Barber contended that without trust “you won’t have a basic mutual empathy and understanding to fall back on when you hit the inevitable bumps that arise.”

Barber also emphasized that creative problem solving works far better when people are in the same room than when they rely on phone calls or emails.

With such a wealth of benefits for in-person communication, it is a little concerning to see a tendency towards less personal methods of cooperation. However, the notion that all young people eschew conversation in favor of texting doesn’t seem to be correct.

Perhaps surprisingly, a 2016 survey found that 55 percent of millennials actually do prefer in-person communication! That said, this is not a particularly overwhelming majority.

Followers of this blog will have already made the connection between in-person communication and either nonverbal behavior or microexpressions. We have found repeatedly that both are critical in really understanding a person, either by recognizing their underlying emotional states or by telling more effectively if they are lying to us.

Even if you cannot always opt for in-person communication, check out our past blog here about the power of reading into the sound of a voice, or simply get better at handling the face-to-face conversations that are so important.

Filed Under: Nonverbal Behavior, Technology

The Humintell Blog May 31, 2017

Emotion in an Artificial Intelligence World

It is an almost omnipresent fear these days that technology is degrading human connections, but could we leverage that same technology to foster closer emotional ties?

We already have enough trouble reading each other’s emotions, and this becomes even harder when we communicate over long distances, whether through email, phone calls, or even video chat. We are not face to face and cannot develop that emotional recognition, as technology, distance, and distracting stimuli add up to interfere with our emotional connections.

Instead, many people compensate with uniform online expressions, like emojis or acronyms conveying laughter or sadness, such as the classic LOL (Laughing Out Loud). While these can make progress in expressing emotions, they cannot replace actual, in-person laughter and lack a great deal of emotional nuance. Moreover, the problem seems intractable, given that modern society forces us to communicate over great distances in order to stay in touch.

That’s where developers like True Emoji, Affectiva, and Brain Power come in. These companies are using a concept called Emotion AI, which attempts to give computers and other forms of artificial intelligence the ability to recognize and understand human emotions. Affectiva, for instance, developed Emotion AI software that has been used in fascinating ways to teach computers to recognize facial expressions and understand emotions.

True Emoji has used this to address the problem of online communication. They have developed an app that reads a user’s facial expression and develops corresponding personalized emojis. This allows users to select an emoji that fits with their own emotion, rather than relying on a clichéd list of preset icons.
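True Emoji’s actual pipeline is proprietary, but the general idea it describes — score a face for basic emotions, then pick the best-matching icon — can be sketched in a few lines. Everything here is hypothetical: the scores would really come from a facial-expression model, and the emoji table is just an illustration.

```python
# Hypothetical sketch of an expression-to-emoji step.
# In a real app, `scores` would come from a facial-expression
# classifier; here we hard-code example classifier output.

EMOJI_FOR_EMOTION = {
    "joy": "😀",
    "surprise": "😮",
    "sadness": "😢",
    "anger": "😠",
}

def pick_emoji(scores: dict) -> str:
    """Return the emoji for the highest-scoring emotion."""
    emotion = max(scores, key=scores.get)
    return EMOJI_FOR_EMOTION.get(emotion, "🙂")  # neutral fallback

# Example: the classifier is 80% confident the user looks surprised.
print(pick_emoji({"joy": 0.1, "surprise": 0.8, "sadness": 0.1}))
```

The point of the sketch is only the mapping step; the hard part — recognizing the expression in the first place — is exactly what Emotion AI systems are built to do.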

As True Emoji CEO Sumesh Dugar points out, “The Internet has created a huge divide between emotions and communications… How many times have you shown surprise just by sending an emoticon?”

Similarly, Brain Power has put this principle to use trying to teach autistic children how to better recognize emotions. While autistic children have trouble recognizing emotion, this does not mean that they lack empathy. As we discussed in a previous blog, this just means that they can benefit from being taught emotional recognition skills!

Brain Power’s work focuses on the creation of interactive games that respond to the emotions of the participants. Joey Salisbury, the Brain Power director of software development, describes these as “augmented reality games with the goal of creating new, engaging ways for families to play together and grow closer while learning about emotions.”

These are just two applications of this burgeoning field. By abandoning an age-old divide between technology and emotion, we can allow the two to build on each other, creating emotionally intelligent machines and using technology to foster social ties.

For more information on this subject, see our previous blogs here and here.

Filed Under: Emotion, Science, Technology

The Humintell Blog March 23, 2017

Can We Learn Empathy from Robots?

Many people familiar with science fiction harbor an ingrained fear of, and repulsion toward, what they see as cold and unfeeling robots.

The idea of widespread artificial intelligence brings to mind terrifying visions from films such as The Terminator or The Matrix, both of which present an apocalyptic future where artificial intelligence turns on mankind with disastrous results. The basic concern seems to be that robots lack any sense of empathy towards their human creators.

However, many humans already struggle with empathy, and this problem is especially acute in the field of medicine. Unfortunately, many patients struggle to effectively communicate their pain to doctors, the very people who are able to treat it. Granted, pain is a difficult thing to communicate, but there is some evidence that doctors are even worse at recognizing it than the general population.

This may be borne of necessity, as medical professionals are required to distance themselves emotionally from patients in order to conduct treatments in a scientific and objective fashion. That said, it creates problems in trying to understand and diagnose pain conditions.

Dr. Laurel Riek, a professor of computer science at the University of California, San Diego, actually sought to test whether doctors could properly recognize emotional expressions in their patients. In fact, when medical experts and laypeople were exposed to digitally simulated facial expressions, the clinicians proved to be much less accurate at recognizing pain.

While the study analyzed various emotions, including anger and disgust, recognition of pain represented the starkest disparity between the groups. Only 54 percent of medical professionals successfully identified pain as opposed to an 83 percent success rate for laypeople.

This experiment simulated facial expressions not from images of actual humans but from computer-generated imagery and an actual robot. The robot was created by analyzing a vast video archive of human expressions and using face-tracking software to graft those expressions onto its uncannily realistic rubber face. The robot is named Philip K. Dick.

Now, Dr. Riek is trying to use robots like Philip K. Dick to teach doctors how to better understand emotion. There is some precedent for this, as clinicians have often used robots as practice dummies for learning medicine.

But she has pointed out a major flaw in the use of these robotic training tools: “These robots can bleed, breathe, and react to medication… They are incredible, but there is a major design flaw – their face.” She explains that facial expressions are critical in communicating pain to doctors, not just in interacting with the patient but also in quickly diagnosing strokes or adverse reactions to medication.

This entire enterprise may strike many readers as highly ironic, given the cold, calculated image that science fiction has given us of artificial intelligence. Even the robot’s namesake was a prolific writer who grappled with robots’ lack of empathy. However, Dr. Riek’s work demonstrates how many varied applications such a powerful technology can have in better understanding emotions and facial expressions.

For more research on empathy and facial recognition, check out our past blogs here and here.

Filed Under: Emotion, Technology

