Whether you are playing poker or haggling over a deal, you might think that you can hide your true emotions.
But telltale signs can reveal that you are concealing something, and now researchers at Oxford University and Oulu University are developing software that can recognise these ‘micro-expressions’ - which could be bad news for liars.
‘Micro-expressions are very rapid facial expressions, lasting between a twenty-fifth and a third of a second, that reveal emotions people try to hide,’ Tomas Pfister of Oxford University’s Department of Engineering Science tells me.
‘They can be used for lie detection and are actively used by trained officials at US airports to detect suspicious behaviour.
‘For example, a terrorist trying to conceal a plan to commit suicide would very likely show a very short expression of intense anguish. Similarly, a business negotiator who has been offered a favourable price for a big deal would likely show a happy micro-expression.’
Tomas is leading efforts to create software that can automatically detect these micro-expressions - something he says is particularly attractive because humans are not very good at accurately spotting them.
He explains that two characteristics of micro-expressions make them particularly challenging for a computer to recognise:
Firstly, they are involuntary: ‘How can we get human training data for our algorithm when the expressions are involuntary?’ he comments. ‘We cannot rely on actors as they cannot act out involuntary expressions.’
The second big problem is that they occur for only a fraction of a second: this means that, with normal speed cameras, they will only appear in a very limited number of frames, leaving only a small amount of data for a computer to go on.
The researchers tackled the first problem with an experiment in which participants were induced to suppress their emotions.
‘Subjects were recorded watching 16 emotion-eliciting film clips while asked to attempt to suppress their facial expressions,’ Tomas explains.
‘They were told that the experimenters were watching their faces and that, if their expressions leaked and an experimenter correctly guessed which clip they were watching, they would be asked to fill in a dull 500-question survey. This induced 77 micro-expressions in 6 (now 21) subjects.’
To overcome the problem of the limited number of frames, the researchers used a temporal interpolation method in which each micro-expression is interpolated across a larger number of frames - essentially, ‘gaps’ in the data are filled in using the existing frames. This makes it possible to detect micro-expressions even with a standard-speed camera.
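To give a rough idea of what temporal interpolation does, here is a minimal sketch in Python. The researchers' actual method maps the video onto a curve via graph embedding and resamples along it; the simplified stand-in below just linearly interpolates each pixel's intensity over time to stretch a short clip to more frames. The function name and array shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def interpolate_frames(frames, target_len):
    """Temporally interpolate a short frame sequence to target_len frames.

    Simplified stand-in for graph-embedding interpolation: each pixel's
    intensity curve is linearly resampled along the time axis.
    `frames` has shape (n_frames, height, width).
    """
    frames = np.asarray(frames, dtype=float)
    n = frames.shape[0]
    # Original frame positions, and the denser positions to sample at
    src_t = np.linspace(0.0, 1.0, n)
    dst_t = np.linspace(0.0, 1.0, target_len)
    # Interpolate every pixel's time series independently
    flat = frames.reshape(n, -1)
    out = np.empty((target_len, flat.shape[1]))
    for j in range(flat.shape[1]):
        out[:, j] = np.interp(dst_t, src_t, flat[:, j])
    return out.reshape((target_len,) + frames.shape[1:])

# A 3-frame, 2x2-pixel "video" stretched to 5 frames
tiny = np.array([[[0, 0], [0, 0]],
                 [[4, 4], [4, 4]],
                 [[8, 8], [8, 8]]])
longer = interpolate_frames(tiny, 5)
print(longer.shape)  # (5, 2, 2)
```

The interpolated clip gives a classifier more temporal samples to extract features from, which is what makes a normal-speed camera workable for expressions lasting only a few frames.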
Early results from the work are promising, with the automated method able to detect micro-expressions more reliably than humans can, Tomas comments:
‘The human detection accuracies reported in the literature are significantly lower than our 79% accuracy. We are currently running human micro-expression recognition experiments on our data to obtain a directly comparable human accuracy.’
But the writing may not be on the wall for liars and con-artists just yet.
Automated recognition of micro-expressions is one thing, Tomas says, but detecting deception, and uncovering the truth, is considerably harder:
‘Micro-expressions should be treated only as clues that a person is hiding something, not as conclusive evidence of deception. They cannot indicate what that person is hiding or why they are attempting to conceal it.’
Tomas adds: ‘That said, our initial experiments do indicate that our approach can distinguish deceptive from truthful micro-expressions, but we will need to conduct further experiments to confirm this.’
Top image: An example of a facial micro-expression (top-left) being interpolated through graph embedding (top-right); the result from which spatiotemporal local texture descriptors are extracted (bottom-right), enabling recognition with multiple kernel learning.
Bottom image left: The lower figure shows a temporal cross-section during the six-frame facial micro-expression depicted in the upper figure. The cross-section is taken at a fixed x-coordinate on the subject’s upper lip.
Bottom image right: An illustration of the temporal interpolation method: the video is mapped onto a curve along which a new video is sampled.