Laura’s Lens
A reading list of articles and other links I use to inform my work at Small Technology Foundation, aiming to add a new link every weekday. Continued from the Ind.ie Radar and Ind.ie’s Weekly Roundups. Subscribe to the Laura’s Lens RSS feed.
-
You Are Now Remotely Controlled
Written by Shoshana Zuboff on The New York Times.
“In Wonderland, we celebrated the new digital services as free, but now we see that the surveillance capitalists behind those services regard us as the free commodity. We thought that we search Google, but now we understand that Google searches us. We assumed that we use social media to connect, but we learned that connection is how social media uses us. We barely questioned why our new TV or mattress had a privacy policy, but we’ve begun to understand that “privacy” policies are actually surveillance policies.
…
All of these delusions rest on the most treacherous hallucination of them all: the belief that privacy is private. We have imagined that we can choose our degree of privacy with an individual calculation in which a bit of personal information is traded for valued services — a reasonable quid pro quo.
…
The lesson is that privacy is public — it is a collective good that is logically and morally inseparable from the values of human autonomy and self-determination upon which privacy depends and without which a democratic society is unimaginable.”
Read ‘You Are Now Remotely Controlled’ on the New York Times site.
Tagged with: surveillance capitalism, privacy, inequality.
-
The Secretive Company That Might End Privacy as We Know It
Written by Kashmir Hill on The New York Times.
“His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
…
The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.”
Read ‘The Secretive Company That Might End Privacy as We Know It’ on the New York Times site.
Tagged with: Clearview AI, facial recognition, privacy.
-
Google Nest or Amazon Ring? Just reject these corporations' surveillance and a dystopic future
Written by Evan Greer on NBC News Think.
“Fight for the Future is joining other consumer privacy and civil liberties experts and issuing an official product warning encouraging people to not buy Amazon Ring cameras because of the clear threat that they pose to all of our privacy, safety, and security.
For too long, we’ve been sold a false choice between privacy and security. It’s more clear every day that more surveillance does not mean more safety, especially for the most vulnerable. Talk to your family and friends and encourage them to do their research before putting any private company’s surveillance devices on your door or in your home. In the end, companies like Amazon and Google don’t care about keeping our communities safe; they care about making money.”
Tagged with: Ring, Google Nest, corporate surveillance.
-
Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests
Written by Natasha Lomas on TechCrunch.
“If the Court agrees with the [Advocate general]’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”
Tagged with: mass surveillance, government surveillance, privacy.
-
Systemic Algorithmic Harms
Written by Kinjal Dave on Data & Society Points.
“Because both ‘stereotype’ and ‘bias’ are theories of individual perception, our discussions do not adequately prioritize naming and locating the systemic harms of the technologies we build. When we stop overusing the word ‘bias,’ we can begin to use language that has been designed to theorize at the level of structural oppression, both in terms of identifying the scope of the harm and who experiences it.”
Read ‘Systemic Algorithmic Harms’ on the Data & Society Points site.
Tagged with: algorithms, oppression, systemic harm.
-
Grindr Shares Location, Sexual Orientation Data, Study Shows
Written by Sarah Syed, Natalia Drozdiak, and Nate Lanxon on Bloomberg.
“Grindr is sharing detailed personal data with thousands of advertising partners, allowing them to receive information about users’ location, age, gender and sexual orientation…” … “‘Every time you open an app like Grindr, advertisement networks get your GPS location, device identifiers and even the fact that you use a gay dating app,’ said Austrian privacy activist Max Schrems.”
Read ‘Grindr Shares Location, Sexual Orientation Data, Study Shows’ on the Bloomberg site.
Tagged with: Grindr, surveillance capitalism, privacy.
-
Technology Can't Fix Algorithmic Injustice
Written by Annette Zimmermann, Elena Di Rosa, and Hochan Kim on Boston Review.
“Some contend that strong AI may be only decades away, but this focus obscures the reality that “weak” (or “narrow”) AI is already reshaping existing social and political institutions. Algorithmic decision making and decision support systems are currently being deployed in many high-stakes domains, from criminal justice, law enforcement, and employment decisions to credit scoring, school assignment mechanisms, health care, and public benefits eligibility assessments. Never mind the far-off specter of doomsday; AI is already here, working behind the scenes of many of our social systems.
What responsibilities and obligations do we bear for AI’s social consequences in the present—not just in the distant future? To answer this question, we must resist the learned helplessness that has come to see AI development as inevitable. Instead, we should recognize that developing and deploying weak AI involves making consequential choices—choices that demand greater democratic oversight not just from AI developers and designers, but from all members of society.
…
There may be some machine learning systems that should not be deployed in the first place, no matter how much we can optimize them.”
Read ‘Technology Can't Fix Algorithmic Injustice’ on the Boston Review site.
Tagged with: algorithms, artificial intelligence, discrimination.
-
How “Good Intent” Undermines Diversity and Inclusion
Written by Annalee on The Bias.
“‘Assume good intent’ is a particularly pernicious positive expectation that will undermine your code of conduct. The implied inverse of this is that not assuming good intent is against the rules.
…
The harm is that telling people to “assume good intent” is a sign that if they come to you with a concern, you will minimize their feelings, police their reactions, and question their perceptions. It tells marginalized people that you don’t see codes of conduct as tools to address systemic discrimination, but as tools to manage personal conflicts without taking power differences into account. Telling people to “assume good intent” sends a message about whose feelings you plan to center when an issue arises in your community.
…
If you want to build a culture of ‘assuming good intent,’ start by assuming good intent in marginalized people.”
Read ‘How “Good Intent” Undermines Diversity and Inclusion’ on The Bias site.
Tagged with: inclusion, intent, discrimination.
-
Google’s Acquisition of Fitbit Has Implications for Health and Fitness Data
Written by Nicole Lindsey on CPO Magazine.
“Even if the Silicon Valley tech giant doesn’t plan to use that health and fitness data to show you ads, you can rest assured that Google has plenty of other uses for that data.”
-
Why Are You Publicly Sharing Your Child’s DNA Information?
Written by Nila Bala on The New York Times.
“The problem with these tests is twofold. First, parents are testing their children in ways that could have serious implications as they grow older — and they are not old enough to consent. Second, by sharing their children’s genetic information on public websites, parents are forever exposing their personal health data.
…
Dr. Louanne Hudgins, a geneticist at Stanford, cautions parents to consider the long-term privacy of their child’s health information collected through home genetic kits. Their children’s DNA and other health data, she has warned, could be sold to other companies — marketing firms, data brokers, insurance companies — in the same way that social media sites and search engines collect and share data about their users.
…
The sharing of DNA results on open-source genealogy databases to find long-lost relatives poses another privacy risk: When parents share their children’s DNA on these sites, they are effectively sharing it with the world, including with the government and law enforcement investigators.”
Read ‘Why Are You Publicly Sharing Your Child’s DNA Information?’ on the New York Times site.