Laura Kalbag

Technology Can't Fix Algorithmic Injustice

Written by Annette Zimmermann, Elena Di Rosa, and Hochan Kim on Boston Review.

“Some contend that strong AI may be only decades away, but this focus obscures the reality that ‘weak’ (or ‘narrow’) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.

What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.

There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.”

Read ‘Technology Can't Fix Algorithmic Injustice’ on the Boston Review site.

Tagged with: algorithms, artificial intelligence, discrimination.