Laura’s Lens
A reading list of articles and other links I use to inform my work at Small Technology Foundation, aiming for every weekday. Continued from the Ind.ie Radar and Ind.ie’s Weekly Roundups. Subscribe to the Laura’s Lens RSS feed.
-
Big Data and the Underground Railroad
Written by Alvaro M. Bedoya on Slate.
“Far too often, today’s discrimination was yesterday’s national security or public health necessity. An approach that advocates ubiquitous data collection and protects privacy solely through post-collection use restrictions doesn’t account for that.”
Read ‘Big Data and the Underground Railroad’ on the Slate site.
Tagged with: big data, discrimination, privacy.
-
How Big Tech Manipulates Academia to Avoid Regulation
Written by Rodrigo Ochigame on The Intercept.
“There is now an enormous amount of work under the rubric of “AI ethics.” To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on “ethical AI” is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.
…
No defensible claim to “ethics” can sidestep the urgency of legally enforceable restrictions to the deployment of technologies of mass surveillance and systemic violence.”
Read ‘How Big Tech Manipulates Academia to Avoid Regulation’ on The Intercept site.
Tagged with: ethics, artificial intelligence, regulation.
-
Big Mood Machine
Written by Liz Pelly on The Baffler.
“[M]usic streaming platforms are in a unique position within the greater platform economy: they have troves of data related to our emotional states, moods, and feelings. It’s a matter of unprecedented access to our interior lives, which is buffered by the flimsy illusion of privacy.
…
Spotify’s enormous access to mood-based data is a pillar of its value to brands and advertisers, allowing them to target ads on Spotify by moods and emotions. Further, since 2016, Spotify has shared this mood data directly with the world’s biggest marketing and advertising firms.
…
“At Spotify we have a personal relationship with over 191 million people who show us their true colors with zero filter,” reads a current advertising deck. “That’s a lot of authentic engagement with our audience: billions of data points every day across devices! This data fuels Spotify’s streaming intelligence—our secret weapon that gives brands the edge to be relevant in real-time moments.”
…
In Spotify’s world, listening data has become the oil that fuels a monetizable metrics machine, pumping the numbers that lure advertisers to the platform. In a data-driven listening environment, the commodity is no longer music. The commodity is listening. The commodity is users and their moods. The commodity is listening habits as behavioral data. Indeed, what Spotify calls “streaming intelligence” should be understood as surveillance of its users to fuel its own growth and ability to sell mood-and-moment data to brands.
…
What’s in question here isn’t just how Spotify monitors and mines data on our listening in order to use their “audience segments” as a form of currency—but also how it then creates environments more suitable for advertisers through what it recommends, manipulating future listening on the platform.”
Read ‘Big Mood Machine’ on The Baffler site.
Tagged with: Spotify, mood, surveillance capitalism.
-
Who Listens to the Listeners?
Written by LibrarianShipwreck on the LibrarianShipwreck blog.
“And thus, in the guise of a seemingly innocuous tradeoff (in which the user thinks they’re really getting the benefit), the user accepts being subjected to high-tech corporate surveillance.
Importantly, this is one of the primary ways in which such surveillance gets normalized.
…
High-tech surveillance succeeds by slowly chipping away at the obstacles to its acceptance. It does not start with the total takeover, rather it begins on a smaller scale, presenting itself as harmless and enjoyable. As people steadily grow accustomed to this sort of surveillance, as they come to see themselves as its beneficiaries instead of as its victims, they become open to a little bit more surveillance, and a little bit more surveillance, and a little bit more. This is the steady wearing down of defenses, the slow transformation of corporate creepiness into cultural complacency, that allows rampant high-tech surveillance to progress.”
Read ‘Who Listens to the Listeners?’ on the LibrarianShipwreck site.
Tagged with: Spotify, surveillance capitalism, normalisation.
-
The biggest myths about the next billion internet users
Written by Payal Arora on Quartz.
“We need to de-exoticize these users if we are going to genuinely have a healthy global digital culture. They need to be humanized, understood, and kept in mind when designing inclusive platforms. The internet is a critical public resource that is meant for all users—and that includes the world’s poor.”
Read ‘The biggest myths about the next billion internet users’ on the Quartz site.
Tagged with: society, poverty, discrimination.
-
AI thinks like a corporation—and that’s worrying
Written by Jonnie Penn on The Economist.
“After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.
Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc.
…
A central promise of AI is that it enables large-scale automated categorisation… This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority.”
Read ‘AI thinks like a corporation—and that’s worrying’ on The Economist site.
Tagged with: artificial intelligence, corporation, discrimination.
-
My Fight With a Sidewalk Robot
Written by Emily Ackerman on CityLab.
“The advancement of robotics, AI, and other “futuristic” technologies has ushered in a new era in the ongoing struggle for representation of people with disabilities in large-scale decision-making settings.
…
We need to build a technological future that benefits disabled people without disadvantaging them along the way.
…
Accessible design should not depend on the ability of an able-bodied design team to understand someone else’s experience or foresee problems that they’ve never had. The burden of change should not rest on the user (or in my case, the bystander) and their ability to communicate their issues.
…
A solution that works for most at the expense of another is not enough.”
Read ‘My Fight With a Sidewalk Robot’ on the CityLab site.
Tagged with: accessibility, artificial intelligence, autonomous vehicles.
-
Facebook and Google’s pervasive surveillance poses an unprecedented danger to human rights
Written by Kumi Naidoo on Amnesty International.
“Surveillance Giants lays out how the surveillance-based business model of Facebook and Google is inherently incompatible with the right to privacy and poses a systemic threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.
…
The tech giants offer these services to billions without charging users a fee. Instead, individuals pay for the services with their intimate personal data, being constantly tracked across the web and in the physical world as well, for example, through connected devices.
…
The technology behind the internet is not incompatible with our rights, but the business model Facebook and Google have chosen is.”
Read ‘Facebook and Google’s pervasive surveillance poses an unprecedented danger to human rights’ on the Amnesty International site.
Tagged with: Facebook, Google, human rights.
-
The Risks of Using AI to Interpret Human Emotions
Written by Mark Purdy, John Zealley and Omaro Maseli on Harvard Business Review.
“Because of the subjective nature of emotions, emotional AI is especially prone to bias. For example, one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others. Consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect career progression.
…
In short, if left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.”
Read ‘The Risks of Using AI to Interpret Human Emotions’ on the Harvard Business Review site.
Tagged with: artificial intelligence, emotion detection, bias.
-
These Black Women Are Fighting For Justice In A World Of Biased Algorithms
Written by Sherrell Dorsey on Essence.
“By rooting out bias in technology, these Black women engineers, professors and government experts are on the front lines of the civil rights movement of our time.”
Read ‘These Black Women Are Fighting For Justice In A World Of Biased Algorithms’ on the Essence site.
Tagged with: algorithms, discrimination, facial recognition.