Tag: algorithms
-
How Algorithmic Bias Hurts People With Disabilities
Written by Alexandra Reeve Givens on Slate.
“In hiring, for example, new algorithm-driven tools will identify characteristics shared by a company’s “successful” existing employees, then look for those traits when they evaluate new hires. But as the model treats underrepresented traits as undesired traits to receive less weighting, people with disabilities—like other marginalized groups—risk being excluded as a matter of course.
…
While some have called to fix this data problem by collecting more detailed information about job candidates’ disabilities, further collection raises its own distinct and very real concerns about privacy and discrimination.
These problems exist for others, too: people who have marginalized sexual orientations or nonbinary gender identities, those who fall outside U.S. definitions of race and ethnicity, and for people who are members of multiple, intersecting marginalized communities.”
Read ‘How Algorithmic Bias Hurts People With Disabilities’ on the Slate site.
Tagged with: disability, discrimination, algorithms.
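The mechanism Givens describes can be made concrete with a toy sketch (my own illustration, not from the article): a "hire like our successful employees" scorer that weights each trait by its frequency among past hires, so any trait that was historically underrepresented is down-weighted for every future candidate. All names and numbers here are hypothetical.

```python
def trait_weights(successful_employees):
    """Weight each trait by how often it appears among past successful hires."""
    n = len(successful_employees)
    traits = {t for emp in successful_employees for t in emp}
    return {t: sum(t in emp for emp in successful_employees) / n for t in traits}

def score(candidate, weights):
    """Score a candidate as the sum of learned weights of their traits."""
    return sum(weights.get(t, 0.0) for t in candidate)

# Nine of ten past hires share one communication style; one did not.
past_hires = [{"qualified", "typical_communication_style"}] * 9 \
           + [{"qualified", "atypical_communication_style"}]
w = trait_weights(past_hires)

typical = score({"qualified", "typical_communication_style"}, w)
atypical = score({"qualified", "atypical_communication_style"}, w)

# Both candidates are equally qualified, but the one whose trait was
# underrepresented in the training data scores lower as a matter of course.
print(typical, atypical)  # 1.9 1.1
```

Nothing in the scorer refers to disability; the exclusion falls out of rarity in the training data alone, which is exactly the point of the quoted passage.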
-
Systemic Algorithmic Harms
Written by Kinjal Dave on Data & Society Points.
“Because both ‘stereotype’ and ‘bias’ are theories of individual perception, our discussions do not adequately prioritize naming and locating the systemic harms of the technologies we build. When we stop overusing the word ‘bias,’ we can begin to use language that has been designed to theorize at the level of structural oppression, both in terms of identifying the scope of the harm and who experiences it.”
Read ‘Systemic Algorithmic Harms’ on the Data & Society Points site.
Tagged with: algorithms, oppression, systemic harm.
-
Technology Can’t Fix Algorithmic Injustice
Written by Annette Zimmermann, Elena Di Rosa, and Hochan Kim on Boston Review.
“Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.
What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.
…
There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.”
Read ‘Technology Can’t Fix Algorithmic Injustice’ on the Boston Review site.
Tagged with: algorithms, artificial intelligence, discrimination.
-
These Black Women Are Fighting For Justice In A World Of Biased Algorithms
Written by Sherrell Dorsey on Essence.
“By rooting out bias in technology, these Black women engineers, professors and government experts are on the front lines of the civil rights movement of our time.”
Read ‘These Black Women Are Fighting For Justice In A World Of Biased Algorithms’ on the Essence site.
Tagged with: algorithms, discrimination, facial recognition.
-
Trading privacy for survival is another tax on the poor
Written by Ciara Byrne on Fast Company.
“Personal data is used to deny low-income people access to resources or opportunities, but it’s also used to target them with predatory marketing for payday loans or even straight-up scams.”
“Undocumented immigrants, day laborers, homeless people, and those with criminal convictions suffer from another data extreme: living beyond the reach of the data collection systems needed to thrive in society, they gain so much “privacy” that they become increasingly invisible. Living in this surveillance gap can be as damaging as living under constant surveillance, and is often a reaction to it.”
Read ‘Trading privacy for survival is another tax on the poor’ on the Fast Company site.
Tagged with: systemic discrimination, privacy, algorithms.
-
Algorithms alone can’t meaningfully hold other algorithms accountable
Written by Frank Pasquale on Real Life.
“The debate over the terms and goals of accountability must not stop at questions like “Is the data processing fairer if its error rate is the same for all races and genders?” We must consider broader questions, such as whether these tools should be developed and deployed at all.”
“The dispute over how to reform or restrict algorithms is rooted in a conflict over to whom algorithmic processes should be accountable. If it’s to a community of engineers and technocrats, then accountability will usually mean more comprehensive data collection to produce less biased algorithms. If it is accountability to the public at large, there are broader issues to consider, such as what limits should be placed on these tools’ use and commercialization, if they should even be developed at all.”
It’s all too quotable.
Frank Pasquale also recommends reading Safiya Umoja Noble and Virginia Eubanks:
“Scholars like Noble and Eubanks need to be at the center of future conversations about algorithmic accountability. They have exposed deep problems at the core of the political economy of information, in data-driven social control. They diversify the forms of expertise and authority that should be recognized in the development of better socio-technical systems. And they are not afraid to question the goals — and not simply the methods — of powerful firms and governments, foregrounding the question of to whom algorithmic systems are accountable.”
Read ‘Algorithms alone can’t meaningfully hold other algorithms accountable’ on the Real Life site.
Tagged with: algorithms, systemic discrimination, ethics.
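To see how narrow the "engineer's" notion of accountability in Pasquale's piece is, here is a hypothetical sketch (my own, not his) of that check: compute the error rate per group and call the system fair if the rates match. The data and group names are invented for illustration.

```python
def error_rate_by_group(records):
    """records: (group, predicted, actual) triples -> {group: error rate}."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (predicted != actual)
    return {g: errors[g] / totals[g] for g in totals}

# Toy decision log: each row is (group, model's decision, true outcome).
decisions = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]
rates = error_rate_by_group(decisions)
print(rates)  # {'group_a': 0.25, 'group_b': 0.25}
```

The audit passes: both groups see the same 0.25 error rate. But as the quote stresses, no computation of this kind can answer the broader question of whether the tool should be deployed or commercialized at all.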