A reading list of articles and other links I use to inform my work at Small Technology Foundation, posted every weekday. Continued from the Ind.ie Radar and Ind.ie’s Weekly Roundups. Subscribe to the Laura’s Lens RSS feed.
Written by Liz Pelly on The Baffler.
“[M]usic streaming platforms are in a unique position within the greater platform economy: they have troves of data related to our emotional states, moods, and feelings. It’s a matter of unprecedented access to our interior lives, which is buffered by the flimsy illusion of privacy.
Spotify’s enormous access to mood-based data is a pillar of its value to brands and advertisers, allowing them to target ads on Spotify by moods and emotions. Further, since 2016, Spotify has shared this mood data directly with the world’s biggest marketing and advertising firms.
“At Spotify we have a personal relationship with over 191 million people who show us their true colors with zero filter,” reads a current advertising deck. “That’s a lot of authentic engagement with our audience: billions of data points every day across devices! This data fuels Spotify’s streaming intelligence—our secret weapon that gives brands the edge to be relevant in real-time moments.”
In Spotify’s world, listening data has become the oil that fuels a monetizable metrics machine, pumping the numbers that lure advertisers to the platform. In a data-driven listening environment, the commodity is no longer music. The commodity is listening. The commodity is users and their moods. The commodity is listening habits as behavioral data. Indeed, what Spotify calls “streaming intelligence” should be understood as surveillance of its users to fuel its own growth and ability to sell mood-and-moment data to brands.
What’s in question here isn’t just how Spotify monitors and mines data on our listening in order to use their “audience segments” as a form of currency—but also how it then creates environments more suitable for advertisers through what it recommends, manipulating future listening on the platform.”
Written by LibrarianShipwreck on LibrarianShipwreck.
“And thus, in the guise of a seemingly innocuous tradeoff (in which the user thinks they’re really getting the benefit), the user accepts being subjected to high-tech corporate surveillance.
Importantly, this is one of the primary ways in which such surveillance gets normalized.
High-tech surveillance succeeds by slowly chipping away at the obstacles to its acceptance. It does not start with the total takeover, rather it begins on a smaller scale, presenting itself as harmless and enjoyable. As people steadily grow accustomed to this sort of surveillance, as they come to see themselves as its beneficiaries instead of as its victims, they become open to a little bit more surveillance, and a little bit more surveillance, and a little bit more. This is the steady wearing down of defenses, the slow transformation of corporate creepiness into cultural complacency, that allows rampant high-tech surveillance to progress.”
Written by Payal Arora on Quartz.
“We need to de-exoticize these users if we are going to genuinely have a healthy global digital culture. They need to be humanized, understood, and kept in mind when designing inclusive platforms. The internet is a critical public resource that is meant for all users—and that includes the world’s poor.”
Written by Jonnie Penn on The Economist.
“After the 2010 BP oil spill, for example, which killed 11 people and devastated the Gulf of Mexico, no one went to jail. The threat that Mr Runciman cautions against is that AI techniques, like playbooks for escaping corporate liability, will be used with impunity.
Today, pioneering researchers such as Julia Angwin, Virginia Eubanks and Cathy O’Neil reveal how various algorithmic systems calcify oppression, erode human dignity and undermine basic democratic mechanisms like accountability when engineered irresponsibly. Harm need not be deliberate; biased data-sets used to train predictive models also wreak havoc.
A central promise of AI is that it enables large-scale automated categorisation… This “promise” becomes a menace when directed at the complexities of everyday life. Careless labels can oppress and do harm when they assert false authority.”
Written by Emily Ackerman on CityLab.
“The advancement of robotics, AI, and other “futuristic” technologies has ushered in a new era in the ongoing struggle for representation of people with disabilities in large-scale decision-making settings.
We need to build a technological future that benefits disabled people without disadvantaging them along the way.
Accessible design should not depend on the ability of an able-bodied design team to understand someone else’s experience or foresee problems that they’ve never had. The burden of change should not rest on the user (or in my case, the bystander) and their ability to communicate their issues.
A solution that works for most at the expense of another is not enough.”
Written by Amnesty International/Kumi Naidoo on Amnesty International.
“Surveillance Giants lays out how the surveillance-based business model of Facebook and Google is inherently incompatible with the right to privacy and poses a systemic threat to a range of other rights including freedom of opinion and expression, freedom of thought, and the right to equality and non-discrimination.
The tech giants offer these services to billions without charging users a fee. Instead, individuals pay for the services with their intimate personal data, being constantly tracked across the web and in the physical world as well, for example, through connected devices.
The technology behind the internet is not incompatible with our rights, but the business model Facebook and Google have chosen is.”
Written by Mark Purdy, John Zealley, and Omaro Maseli on Harvard Business Review.
“Because of the subjective nature of emotions, emotional AI is especially prone to bias. For example, one study found that emotional analysis technology assigns more negative emotions to people of certain ethnicities than to others. Consider the ramifications in the workplace, where an algorithm consistently identifying an individual as exhibiting negative emotions might affect career progression.
In short, if left unaddressed, conscious or unconscious emotional bias can perpetuate stereotypes and assumptions at an unprecedented scale.”
Written by Sherrell Dorsey on Essence.
“By rooting out bias in technology, these Black women engineers, professors and government experts are on the front lines of the civil rights movement of our time.”
Written by Megan Wollerton on CNET.
“It’s a complete anomaly – a solidly performing, decently priced device that just isn’t suited for anyone because of the privacy concerns and increasingly alarming issues plaguing the social networking site.”
Written by Yaël Eisenstat on The Washington Post.
“[T]rue transparency would include information about the tools that differentiate advertising on Facebook from traditional print and television, and in fact make it more dangerous: Can I see if a political advertiser used the custom audience tool, and if so, if my email address was uploaded? Can I see what look-alike audience advertisers are seeking? Can I see a true, verified name of the advertiser in the disclaimer? Can I see if and how your algorithms amplified the ad? If not, the claim that Facebook is simply providing a level playing field for free expression is a myth.
Free political speech is core to our democratic principles, and it’s true that social media companies should not be the arbiters of truth. But the only way Facebook or other companies that use our behavioral data to potentially manipulate us through targeted advertising can prevent abuse of their platform to harm our electoral process is to end their most egregious targeting and amplification practices and provide real transparency.
We need lawmakers and regulators to help protect our children, our cognitive capabilities, our public square and our democracy by creating guardrails and rules to deal directly with the incentives and business models of these platforms and the societal harms they are causing.”