Saturday, January 4, 2014

A Tale of Two Men in a Bar




Two men walk into a bar. Each looks at a third man for no more than a second or two. The third man returns a glance to the two men. The first man impassively passes by the third man. The second man immediately gets angry, approaches the third man and gets into a loud confrontation. 

What gives? The third man - whom we'll call Pete - gave each of the first two men the exact same glance. Why were their reactions so different? We'll call the impassive man Bob and the angry man Roger. Bob further compounds the situation by saying Roger is too sensitive. Roger denies this and now is really in a huff! (I think we can deduce that Bob is the man on the left in our above illustration and Roger on the right). "I don't know why, but that Pete just always pisses me off," Roger tells Bob.

How could Pete piss off Roger, but not Bob, with a mere glance? What do you think: is Roger too sensitive, did he overreact? Is Bob a cool customer, or is he missing something? And what did Pete's expression really mean when he returned Roger's and Bob's glances? 

This is a simple scene that plays out every day in dozens of settings throughout our world, from kindergarten sandboxes to corporate boardrooms. We're going to address a number of different brain region activities all in one go today. One is a common difference among us - individual "sensitivity". Two is a bit of a primer on how our senses work, in this case the sense of sight, and in particular a bit about how our "facial recognition software" works. Plus, as an added bonus, we'll further deepen our understanding of what is meant by each of us having our own unique "realities". 

Now, let's return to the scene in the bar. No words were exchanged at first, so the auditory sense wasn't involved. Or was it? Maybe there's more there than "meets the eye". We'll set aside the sense of smell, and no touch or taste was involved. Any of these may have played some role, but we'll leave them for now and focus on the exchanged glances and the sense of sight.

Now you may recall from our introduction to basic brain regions in Neuroscience 101 that we don't actually "see" with our eyes. Our eyes are really just highly sophisticated light collectors. The eyes collect light (and only a narrow band of the light spectrum at that) and translate the bits of information contained in those beams of light (a rather complicated business we'll leave for another day) into electrical signals which they send off to the brain via the optic nerve. What we experience as "vision" really originates in that region at the back of the brain called the occipital lobe. That's where it all happens, not in the eyes. So really when we say someone has a "good eye" for something, what we really mean is that they have a "good occipital lobe" for something!  

Here's where it is:



And it is in the occipital lobe that our "facial recognition software" is located. Facial recognition is one of our more highly evolved functions, though most primates are pretty good at it. Lower mammals, less so (despite what some pet lovers would like to believe, but we'll leave that for another time). Our facial recognition software is an excellent example of what neuroscientist David Eagleman famously calls our "zombie programs", that is, a brain function that runs autonomously without any, or at least very little, conscious input from "you". It's also one of the very first zombie programs to come on stream after you're born. One of the first things a newborn will turn its gaze to is people's faces and, with the help of a few other brain processes, it'll very quickly learn to recognize and lock in on its mother's face.

Our facial recognition software is a very complicated process, and we are not all created equal in it. There's basic stuff - analyzing "data points" to distinguish a man from a woman, say - and then finer stuff to distinguish Bob from Roger, for example. This basic level is now being replicated by facial recognition software - the computer variety, as opposed to our brain "software" - used in places such as airports to pick out known terrorists, to give one example. So artificial intelligence has started to catch up to us humans there.

Here's an example of the artificial version. Our brain version would track similar points but in far more detail.
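If you'd like to see the "data points" idea in the crudest possible terms, here's a little toy sketch in Python - entirely made up for illustration, nothing like a real biometric system and certainly nothing like the brain's version. It just describes a face as a handful of landmark coordinates, turns them into a scale-free "signature" of pairwise distances, and compares two faces by how much their signatures differ.

```python
# Toy illustration of the "data points" idea - invented for this post, not a
# real biometric system. A face is described by a few landmark coordinates,
# turned into a scale-free signature of pairwise distances, and two faces are
# compared by how much their signatures differ.
import math
from itertools import combinations

def signature(landmarks):
    """landmarks: dict mapping point name -> (x, y). Returns normalized pairwise distances."""
    pairs = combinations(sorted(landmarks), 2)
    dists = [math.dist(landmarks[a], landmarks[b]) for a, b in pairs]
    scale = max(dists)                      # normalize so image size doesn't matter
    return [d / scale for d in dists]

def difference(face_a, face_b):
    """Smaller number = more alike; mean absolute difference between signatures."""
    sig_a, sig_b = signature(face_a), signature(face_b)
    return sum(abs(a - b) for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Made-up landmark measurements for two faces.
face_one = {"l_eye": (30, 40), "r_eye": (70, 40), "nose": (50, 60), "mouth": (50, 85)}
face_two = {"l_eye": (28, 42), "r_eye": (72, 41), "nose": (50, 63), "mouth": (50, 90)}

print(difference(face_one, face_two))       # small value: the two layouts are close
```

The real airport systems work from thousands of such measurements (and the brain from far more still), but the principle - reduce a face to numbers that can be compared - is the same.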



  
But now to get, in part, to what we mean by "sensitive". Humans are still better at detecting the wide array of facial expressions we use to display emotions. And it is here that individuals are not created equal. One would assume that we'd all recognize emotional cues from facial expressions in the same way, but not so. Some of us not only collect more "data" - the finer points of varying human expressions - but we also assemble it better and hold a wider array of facial models. 

So two different people could see the exact same expression on a third person's face and literally see two different things just like we saw with our two gentlemen walking into the bar. Now, remember what I said in Neuroscience 101 about our each having our own realities? This is just a fraction of what I'm talking about. 

But - but! - it doesn't end there. 

Once the occipital lobe assembles this "picture" of a given face, it needs to send the data off to another region for further processing. And that would be the "emotional centre". 

All sensory data - sight, sound, touch, taste and smell - gets routed through that little tandem in the heart of the limbic system, the hippocampus and amygdala. The hippocampus is responsible for "filing" data away for future reference, and the amygdala is responsible for "tasting" that visual data - Pete's expression when he glanced at Bob and Roger, for example - for emotional content; it will also attach an emotional value to it. It's the amygdala, not "you", that decides whether you should feel happy, sad, excited, glum and so on about all the little things your sensory organs take in every second that you are conscious (in the awake sense of the word). Then, in a neat little dance between the two of them, the amygdala and hippocampus will decide whether that data should be filed away for future reference and, furthermore, how much priority that data will be given in the future. 
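Just to make that little dance a bit more concrete, here's a rough, purely metaphorical sketch in Python of the routing described above: a percept arrives, the "amygdala" attaches an emotional value to it, and the "hippocampus" files it away only if that value is strong enough. Every number and threshold is invented for illustration - the brain obviously does nothing so tidy.

```python
# A loose, purely metaphorical cartoon of the amygdala/hippocampus "dance":
# a percept arrives, the amygdala attaches an emotional value, and the
# hippocampus files it away - with a priority - only if that value is strong
# enough. All numbers and thresholds are invented for illustration.

memory = []                                   # stands in for the hippocampus's filing cabinet

def amygdala_tag(percept, sensitivity):
    """Attach an emotional value; a more 'sensitive' system amplifies it."""
    return percept["raw_emotional_content"] * sensitivity

def hippocampus_file(percept, emotional_value, store_threshold=0.5):
    """File the percept for future reference only if the attached emotion is strong enough."""
    if emotional_value >= store_threshold:
        memory.append({"what": percept["what"], "priority": emotional_value})

glance = {"what": "that look on Pete's face", "raw_emotional_content": 0.4}

hippocampus_file(glance, amygdala_tag(glance, sensitivity=1.0))   # a "Bob": value 0.4, below threshold, not filed
hippocampus_file(glance, amygdala_tag(glance, sensitivity=2.0))   # a "Roger": value 0.8, filed with high priority

print(memory)   # [{'what': "that look on Pete's face", 'priority': 0.8}]
```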

Those little data processing partners, just to remind, look like this:



So, back to the expression on Pete's face and Bob and Roger experiencing it differently: it is largely the amygdala that differentiates the two individual experiences. One person's amygdala may "decide" that the expression in question is just "meh", while the other person's amygdala may decide that the expression is worth remembering and, to make sure you remember it well, attach a good bit of anger to it (the amygdala actually works with a good deal of other brain regions - emotional regulation centres located in the frontal lobes, for example - but it is the main brain structure responsible for emotions).  

But to understand the scene in the bar more fully, we have to look a bit further into how Bob and Roger each got to be the way they are. 

Remember in Why? I said all newborns start out the same? Well, that's only partially true, of course. As far as basic hardware goes, yes, each newborn will be much the same, but there will be fine differences between each newborn's hardware and wiring, and those are of course determined by genetics and the womb environment. 

Let's look at the regions involved in our bar scene: the occipital lobe, amygdala and hippocampus. They'll look the same on the outside, but genetics will kick their development in slightly different directions. A "sensitive" person may be gifted with a more finely tuned facial recognition software system. They may also have a more active amygdala. Their hippocampus may also be slightly different. 

From there it is the individual's environmental experiences that take over, and those will run from our kindergarten sandbox right up to our boardroom. From the sandbox on, Roger, for example, with his genetically programmed, more sensitive hardware, will not only "see" more in his playmates' facial expressions, but his amygdala will attach more emotion to them, and with this higher emotional value the hippocampus will give this data higher priority when storing it away for future reference (i.e., putting it into short-term and long-term memory banks). And this feedback loop will run autonomously for all of Roger's life, constantly reinforcing his "sensitivity". 
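That feedback loop is easy to caricature in code as well. Here's another invented-numbers sketch: each time a glance gets tagged with enough emotion to be stored, the stored memory nudges the system's "sensitivity" a little higher, so the next similar glance is felt even more strongly. Again, this is a cartoon of the idea, not a model of any actual brain process.

```python
# Invented-numbers cartoon of the feedback loop: whenever a glance is tagged
# with enough emotion to be stored, the stored memory nudges "sensitivity" a
# little higher, so the next similar glance gets tagged even more strongly.
# Thresholds, rates and values are all made up for illustration.

def run_life(initial_sensitivity, encounters=20, learning_rate=0.05, store_threshold=0.5):
    sensitivity = initial_sensitivity
    for _ in range(encounters):
        emotional_value = 0.4 * sensitivity                  # the amygdala tags the glance
        stored_priority = emotional_value if emotional_value >= store_threshold else 0.0
        sensitivity += learning_rate * stored_priority       # stored memories feed back
    return round(sensitivity, 2)

print(run_life(1.0))   # a "Bob": nothing ever crosses the threshold, stays at 1.0
print(run_life(2.0))   # a "Roger": each stored glance nudges sensitivity higher (roughly 3.0 here)
```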

Bob, meanwhile, started out with facial recognition software that simply didn't "see" as many of the nuances in other kids' faces; furthermore, his amygdala was more "meh" about what he did see; and further-furthermore, his hippocampus, because it was receiving a "meh, not important" message, didn't bother filing as much facial data away. This plays out throughout Bob's life and thus, when he took in Pete's expression in the bar, he noticed nothing. 

Roger's more highly attuned "facial recognition" software, however, saw a kaleidoscope of information and furthermore, his memory - a subconscious autonomous function, not a conscious function - remembered something about Pete's expression that Bob did not - or could not. Now what might that be?

Well, we can't be sure, but I'd bet dimes to doughnuts that it has something to do with speech. We won't get into this too much today, but if Roger's facial recognition software is "sensitive", I'd bet that his speech recognition software is as well. Looking at our first brain image again, that's located here, in Wernicke's area.


Just quickly, to wrap today's lesson up: Roger's brain quite likely saw a very similar look on Pete's face at some previous point, a look that was perhaps accompanied by some nasty words. In that case, the speech recognition software would have been included in the loop we looked at above, which would have attached further meaning to the facial expression. Roger doesn't exactly remember that encounter, but his "sensitive" memory does, and thus, without Roger really knowing why, Pete's glance at him - with perhaps an eyebrow raised a certain way - set off an alarm bell in his amygdala: "Hey! It's that "look" again!" Likely, too, Roger had seen similar looks on other people in the past and perhaps had strong emotional components attached to them as well. 

And all this adds up to a mere glance of a second or two "setting off" Roger. The more passive Bob, meanwhile, because his hardware never really developed to notice these things, well, notices nothing.

One last factor and we're done for this segment. These systems don't operate at the same speed in each individual either. Roger's brain might be able to jump from face to face more quickly and deduce more. Bob's might run slower, and at the moment his eyes took in Pete's expression, his facial recognition software and limbic system may still have been grappling with a face he'd exchanged glances with moments before. 

So when I say someone is "sensitive", I actually mean it as a compliment. They just "get" a lot more out of things around them. And they'll generally be more empathetic because they have "better data" to send off to their brain's empathy centre. Which is a topic for another day. 

So there we go; a look at how some simple exchanged glances in a bar get processed differently and some further understanding of how each of our individual "realities" are created - all in one fell swoop. 



Thanks as always for reading along. I hope it was insightful for you and we'll see you next time!

