A reading list of articles and other links I use to inform my work at Small Technology Foundation, posted every weekday. Continued from the Ind.ie Radar and Ind.ie’s Weekly Roundups. Subscribe to the Laura’s Lens RSS feed.
I’m a trans woman. Google Photos doesn’t know how to categorize me
Written by Cara Esten Hustle on Fast Company.
““The same data set that could be used to build a system to prevent showing trans folks photos from before they started transition could be trivially used and weaponized by an authoritarian state to identify trans people from street cameras,” [Penelope] Phippen says.
With this dystopian future in mind, coupled with the fact that federal agencies like ICE already use facial recognition technology for immigration enforcement, do we even want machine learning to piece together a coherent identity from both pre- and post-transition images?
With trans people facing daily harassment simply for existing as ourselves, the stakes seem too high to risk teaching these systems how to recognize us”
This made me think of Tatiana Mac’s brilliant ‘The Banal Binary’ talk at the New Adventures conference two weeks ago.
Leaked Documents Expose the Secretive Market for Your Web Browsing Data
Written by Joseph Cox on Motherboard.
“The data obtained by Motherboard and PCMag includes Google searches, lookups of locations and GPS coordinates on Google Maps, people visiting companies’ LinkedIn pages, particular YouTube videos, and people visiting porn websites. It is possible to determine from the collected data what date and time the anonymized user visited YouPorn and PornHub, and in some cases what search term they entered into the porn site and which specific video they watched.”
LK: I read all claims of “anonymised”/“can’t be de-anonymised” with skepticism.
Tinder's New Panic Button Is Sharing Your Data With Ad-Tech Companies
Written by Shoshana Wodinsky on Gizmodo.
““The kinds of people that are gonna be coerced into downloading [the safety app] are exactly the kind of people that are put most at risk by the data that they’re sharing…””
You Are Now Remotely Controlled
Written by Shoshana Zuboff on New York Times.
“All of these delusions rest on the most treacherous hallucination of them all: the belief that privacy is private. We have imagined that we can choose our degree of privacy with an individual calculation in which a bit of personal information is traded for valued services — a reasonable quid pro quo.
The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.”
The Secretive Company That Might End Privacy as We Know It
Written by Kashmir Hill on New York Times.
“His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.”
Google Nest or Amazon Ring? Just reject these corporations' surveillance and a dystopic future
Written by Evan Greer on NBC News Think.
“Fight for the Future is joining other consumer privacy and civil liberties experts and issuing an official product warning encouraging people to not buy Amazon Ring cameras because of the clear threat that they pose to all of our privacy, safety, and security.
For too long, we’ve been sold a false choice between privacy and security. It’s more clear every day that more surveillance does not mean more safety, especially for the most vulnerable. Talk to your family and friends and encourage them to do their research before putting any private company’s surveillance devices on your door or in your home. In the end, companies like Amazon and Google don’t care about keeping our communities safe; they care about making money.”
Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests
Written by Natasha Lomas on TechCrunch.
“If the Court agrees with the [Advocate general]’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”
Systemic Algorithmic Harms
Written by Kinjal Dave on Data & Society Points.
“Because both ‘stereotype’ and ‘bias’ are theories of individual perception, our discussions do not adequately prioritize naming and locating the systemic harms of the technologies we build. When we stop overusing the word ‘bias,’ we can begin to use language that has been designed to theorize at the level of structural oppression, both in terms of identifying the scope of the harm and who experiences it.”
Grindr Shares Location, Sexual Orientation Data, Study Shows
Written by Sarah Syed, Natalia Drozdiak, and Nate Lanxon on Bloomberg.
“Grindr is sharing detailed personal data with thousands of advertising partners, allowing them to receive information about users’ location, age, gender and sexual orientation…” … “‘Every time you open an app like Grindr, advertisement networks get your GPS location, device identifiers and even the fact that you use a gay dating app,’ said Austrian privacy activist Max Schrems.”
Technology Can't Fix Algorithmic Injustice
Written by Annette Zimmermann, Elena Di Rosa, and Hochan Kim on Boston Review.
“Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.
What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.
There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.”