Risks Of Bias In ‘Emotional AI’
Artificial intelligence (AI) has grown exponentially. From driverless vehicles to voice automation in the home, the practical applications of AI are rapidly expanding, leading to growing interest in the potential biases of such algorithms.
AI facial analysis is increasingly used to screen people. For example, some organisations ask candidates to answer predefined questions in a recorded video and then use facial recognition to analyse the applicants' faces.
Furthermore, some companies are developing facial recognition software to scan faces in crowds and assess threats, specifically citing doubt and anger as emotions that indicate a threat.
A recent study shows evidence that facial recognition software interprets emotions differently based on the person’s race.
Lisa Feldman Barrett, Professor of Psychology at Northeastern University, said that such technologies appear to disregard a growing body of evidence undermining the notion that basic facial expressions are universal across cultures. As a result, such technologies, some of which are already being deployed in real-world settings, run the risk of being unreliable or discriminatory.
Because of the subjective nature of emotions, emotional AI is especially prone to bias. For example, a study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others.
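To make the kind of disparity such a study measures more concrete, the sketch below shows one simple way an organisation might audit its own emotion-analysis output: compare the average "negative emotion" score a model assigns across demographic groups on comparable images. The records, group labels, and scores here are hypothetical placeholders, not data from any particular study or product.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: each holds a demographic group label and the
# negative-emotion score (0.0-1.0) returned by some emotion-analysis model
# for one image. In a real audit these would come from the model's output.
predictions = [
    {"group": "A", "negative_score": 0.62},
    {"group": "A", "negative_score": 0.55},
    {"group": "B", "negative_score": 0.31},
    {"group": "B", "negative_score": 0.28},
]

# Collect the scores by demographic group.
scores_by_group = defaultdict(list)
for record in predictions:
    scores_by_group[record["group"]].append(record["negative_score"])

# Report the average negative-emotion score per group. A large gap between
# groups on otherwise comparable images is one indicator of the kind of
# disparity the study describes.
averages = {group: mean(scores) for group, scores in scores_by_group.items()}
for group, avg in sorted(averages.items()):
    print(f"group {group}: mean negative-emotion score = {avg:.2f}")

gap = max(averages.values()) - min(averages.values())
print(f"largest gap between groups: {gap:.2f}")
```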
Today, AI is not sophisticated enough to understand cultural differences in expressing and reading emotions, making it harder to draw accurate conclusions. For instance, a smile might mean one thing in Germany and another in Japan. Confusing these meanings can lead businesses to make wrong decisions. Consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect employment prospects.
Conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.
The implications of algorithmic bias are a clear reminder that business and technology leaders must understand such biases and prevent them from seeping in. For example, employees often think they are in the right role, but on trying new projects they might find their skills are better aligned elsewhere. Some organisations already allow employees to try different roles once a month to see which jobs they like most, and researchers warn that this is exactly where bias in AI could reinforce existing stereotypes.
In the US, where nearly 90% of civil engineers and over 80% of first-line police officers and detectives are male, an algorithm conditioned to analyse predominantly male features might struggle to read emotional responses and engagement levels among female recruits. This could lead to flawed role allocation and training decisions.
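As a rough illustration of why that matters, the sketch below (with entirely hypothetical labels and predictions) evaluates an emotion model's accuracy separately for male and female subjects. A model trained mostly on one group would tend to reveal itself as a gap in exactly this kind of per-group breakdown.

```python
# Minimal per-group evaluation sketch, assuming hypothetical labelled test
# data: each record has the subject's gender, the true emotion, and the
# model's prediction. None of these values come from a real system.
test_results = [
    {"gender": "male",   "true": "engaged", "predicted": "engaged"},
    {"gender": "male",   "true": "neutral", "predicted": "neutral"},
    {"gender": "female", "true": "engaged", "predicted": "neutral"},
    {"gender": "female", "true": "engaged", "predicted": "engaged"},
]

# Count correct predictions and totals per group.
correct, total = {}, {}
for r in test_results:
    g = r["gender"]
    total[g] = total.get(g, 0) + 1
    correct[g] = correct.get(g, 0) + (r["true"] == r["predicted"])

# A noticeably lower accuracy for one group suggests the training data
# under-represented that group.
for g in sorted(total):
    print(f"{g}: accuracy = {correct[g] / total[g]:.2f} ({total[g]} samples)")
```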
The future development of AI facial recognition tools must deal with some key challenges:
Creating products that adapt to consumer emotions: With emotion tracking, product developers can learn which features elicit the most excitement and engagement in users. As these systems become more commonplace, insurance companies will want a share of the data. This could mean higher premiums for older drivers, for example, if the data suggests that, despite many prompts to rest, the driver pressed on.
Improving tools to measure customer satisfaction: Some companies are giving businesses the tools to help their employees interact better with customers. Their algorithms can not only identify “compassion fatigue” in customer service agents, but can also guide agents on how to respond to callers via an app.
Transforming the learning experience: Emotional insights could be used to augment the learning experience across all ages. It could, for example, allow teachers to design lessons that spur maximum engagement, putting key information at engagement peaks and switching content at troughs. It also offers insights into the students themselves, helping to identify who needs more attention.
As more and more companies incorporate emotional AI into their operations and products, it is imperative that they are aware of the potential for bias to creep in and that they actively work to prevent it.
Emotional AI will be a very powerful tool, forcing businesses to reconsider their relationships with consumers and employees alike. It will not only offer new metrics to understand people, but will also redefine products as we know them. But as businesses expand into the world of emotional intelligence, the need to prevent biases from creeping in will be essential. Failure to act will leave certain groups systematically more misunderstood than ever, a far cry from the promises offered by emotional AI.