Tag: artificial intelligence
-
Car Companies Want to Monitor Your Every Move With Emotion-Detecting AI
Written by Todd Feathers on Motherboard.
“Very soon, Cerence announced, it plans to deepen that data mining operation with in-cabin cameras linked to emotion-detecting AI—algorithms that monitor minute changes in facial expression in order to determine a person’s emotional state at any given time.
…
But safety is only one attraction of in-cabin monitoring. The systems also hold huge potential for harvesting the kind of behavioral data that Google, Facebook, and other surveillance capitalists have exploited to target ads and influence purchasing habits.
…
Eyeris CEO Modar Alaoui likewise told Motherboard that while his company’s technology is primarily designed to improve safety, “we do foresee at some point that [automakers] will try to leverage the data for several use cases, whether it be for advertising or [determining] insurance” premiums.”
Tagged with: surveillance capitalism, emotion detection, artificial intelligence.
-
Technology Can't Fix Algorithmic Injustice
Written by Annette Zimmermann, Elena Di Rosa, and Hochan Kim on Boston Review.
“Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.
What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.
…
There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.”
Read ‘Technology Can't Fix Algorithmic Injustice’ on the Boston Review site.
Tagged with: algorithms, artificial intelligence, discrimination.
-
How Big Tech Manipulates Academia to Avoid Regulation
Written by Rodrigo Ochigame on The Intercept.
“There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.
…
No defensible claim to “ethics” can sidestep the urgency of legally enforceable restrictions to the deployment of technologies of mass surveillance and systemic violence.”
Read ‘How Big Tech Manipulates Academia to Avoid Regulation’ on The Intercept site.
Tagged with: ethics, artificial intelligence, regulation.
-
AI thinks like a corporation—and that’s worrying
Written by Jonnie Penn on The Economist.
“After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.
Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc.
…
A central promise of AI is that it enables large-scale automated categorisation… This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority.”
Read ‘AI thinks like a corporation—and that’s worrying’ on The Economist site.
Tagged with: artificial intelligence, corporation, discrimination.
-
My Fight With a Sidewalk Robot
Written by Emily Ackerman on CityLab.
“The advancement of robotics, AI, and other “futuristic” technologies has ushered in a new era in the ongoing struggle for representation of people with disabilities in large-scale decision-making settings.
…
We need to build a technological future that benefits disabled people without disadvantaging them along the way.
…
Accessible design should not depend on the ability of an able-bodied design team to understand someone else’s experience or foresee problems that they’ve never had. The burden of change should not rest on the user (or in my case, the bystander) and their ability to communicate their issues.
…
A solution that works for most at the expense of another is not enough.”
Tagged with: accessibility, artificial intelligence, autonomous vehicles.
-
The Risks of Using AI to Interpret Human Emotions
Written by Mark Purdy, John Zealley and Omaro Maseli on Harvard Business Review.
“Because of the subjective nature of emotions, emotional AI is especially prone to bias. For example, one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others. Consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect career progression.
…
In short, if left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.”
Read ‘The Risks of Using AI to Interpret Human Emotions’ on the Harvard Business Review site.
Tagged with: artificial intelligence, emotion detection, bias.