Tagged with: “emotion detection”
Written by Todd Feathers on Motherboard.
“Very soon, Cerence announced, it plans to deepen that data mining operation with in-cabin cameras linked to emotion-detecting AI—algorithms that monitor minute changes in facial expression in order to determine a person’s emotional state at any given time.
But safety is only one attraction of in-cabin monitoring. The systems also hold huge potential for harvesting the kind of behavioral data that Google, Facebook, and other surveillance capitalists have exploited to target ads and influence purchasing habits.
Eyeris CEO Modar Alaoui likewise told Motherboard that while his company’s technology is primarily designed to improve safety, “we do foresee at some point that [automakers] will try to leverage the data for several use cases, whether it be for advertising or [determining] insurance” premiums.”
Written by Mark Purdy, John Zealley, and Omaro Maseli on Harvard Business Review.
“Because of the subjective nature of emotions, emotional AI is especially prone to bias. For example, one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others. Consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect career progression.
In short, if left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.”