Detecting More Complex Affective States

Our technology leadership and world-class scientific advisors are part of a larger network of Europe’s top universities and organisations. Through several grants received from the European Union and the European Commission, we’ve had the opportunity to work on a variety of projects that have strengthened our capabilities and technology, and given our team extremely valuable experience.

Our current live projects include the SEWA project: a €3.6m grant to work on automatic sentiment analysis in the wild, funded by the EU's Horizon 2020 programme. The grant will help develop automated technology to quantify the correlation between behaviours and emotions, focusing on behavioural indicators that were previously too subtle for emotion measurement: someone holding their head in their hand, twirling their hair or biting their nails, for instance – all important behavioural cues.
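
To make that concrete, here is a minimal sketch of what quantifying such a correlation might look like. The per-frame signals are invented stand-ins for real detector output – the hand-on-face indicator and the valence score are illustrative assumptions, not the SEWA pipeline:

```python
import numpy as np
from scipy.stats import pearsonr

# Toy per-frame signals for one viewing session (both invented for illustration):
# behaviour[t] = 1 if a hand-on-face gesture was detected in frame t, else 0;
# valence[t]   = a continuous emotion score in [-1, 1] from a facial coding model.
rng = np.random.default_rng(0)
behaviour = rng.integers(0, 2, size=500).astype(float)
valence = -0.4 * behaviour + rng.normal(0.0, 0.3, size=500)

# Quantify how strongly the behavioural indicator tracks the emotion signal.
r, p = pearsonr(behaviour, valence)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```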

By partnering with Imperial College and the University of Passau, the academic leaders in the fields of sentiment detection from computer vision and audio analysis, we aim to develop computers’ abilities to analyse and understand people’s emotions and behaviours using facial, vocal and verbal analysis. We want to develop the ability of standard webcams to automatically detect more complex behavioural and affective states than the six basic emotions: whether a person likes or dislikes what they’re seeing, for instance, or whether they’re bored.
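
As an illustration only, detecting a state like boredom can be framed as a classifier over facial action-unit activations. Everything below – the feature set, the labels and the data – is invented for the sketch and is not the project's actual method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Invented training data: each row summarises one clip by the mean activation
# of three facial action units (say, brow lowerer, lip corner puller, eyelid
# droop) extracted from an ordinary webcam feed.
rng = np.random.default_rng(1)
X = rng.random((400, 3))                   # AU activations in [0, 1]
y = (X[:, 1] - X[:, 2] > 0.1).astype(int)  # 1 = "likes it", 0 = "bored"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```
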
Maja Pantic, Professor of Affective Computing at Imperial and Realeyes Scientific Advisor, was interviewed by Charlie Rose for a CBS 60 Minutes report on Artificial Intelligence. As she explains, enabling machines to better understand people adds a human element to data-driven decision-making, ultimately ensuring that computers are better able to make the right decisions. Equally, emotion-enabled technology could change the way we interact with the technology around us, be it our computers, our phones, the music we listen to or the videos we're watching. Eventually, computers will be better than we are at reading emotions, able to detect nuances that humans might miss.

An additional €5.3m grant has been awarded to the SpeechXRay project, which focuses on building the next-generation audio-visual user recognition platform for authentication purposes, pushing the boundaries of what technology can do for security, privacy and usability. We’ll be using our facial coding technology to detect when someone may be under duress, or when facial movements like chewing or drinking are getting in the way of authentication.
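
A rough sketch of how such gating could sit around score-level audio-visual fusion follows; the signal names, weights and threshold are all assumptions for illustration, not SpeechXRay's actual design:

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    face_match: float   # similarity to the enrolled face template, in [0, 1]
    voice_match: float  # similarity to the enrolled voice template, in [0, 1]
    occluded: bool      # mouth region occluded (chewing, drinking, ...)
    duress: bool        # facial coding model flags likely duress

def authenticate(frame: FrameSignals, threshold: float = 0.8) -> str:
    # Refuse to decide on unusable frames rather than risk a false accept.
    if frame.occluded:
        return "retry: face partially occluded"
    if frame.duress:
        return "escalate: possible duress detected"
    # Simple score-level fusion of the two biometric channels.
    fused = 0.5 * frame.face_match + 0.5 * frame.voice_match
    return "accept" if fused >= threshold else "reject"

print(authenticate(FrameSignals(0.92, 0.88, occluded=False, duress=False)))
```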

The applications of automated facial coding are many and varied, and hardly limited to advertising – from health and education to gaming and security. We’re proud to be part of a strong academic community that is constantly working to push the boundaries of what technology can do to improve people’s lives.