A reading list of articles and other links I use to inform my work at Small Technology Foundation, which I aim to update every weekday. Continued from the Ind.ie Radar and Ind.ie’s Weekly Roundups. Subscribe to the Laura’s Lens RSS feed.
Concern trolls and power grabs: Inside Big Tech’s angry, geeky, often petty war for your privacy
Written by Issie Lapowsky on Protocol.
“Snyder and others argue these new arrivals, who drape themselves in the flag of competition, are really just concern trolls, capitalizing on fears about Big Tech’s power to cement the position of existing privacy-invasive technologies.”
… “If the privacy advocates inside the W3C have been put off by Rosewell’s approach, he hasn’t exactly been charmed by theirs either… From his perspective, browsers have too much power over the community, and they use that power to quash conversations that might make them look bad.”
A long read where everyone comes out looking bad. (And those portrayed as the “defenders of privacy” aren’t necessarily doing so out of the goodness of their hearts either!)
How Long Until Citizen Gets Someone Killed?
Written by Lil Kalish on Mother Jones.
’Jim Thatcher, an urban studies professor at the University of Washington, Tacoma, is skeptical. “If you give people this power to draw attention based around a dangerous event,” Thatcher says, “then they are actually encouraged to seek out, or at worst manufacture, these dangerous events,” noting the speed with which Citizen escalated the manhunt. Like many others, Thatcher downloaded Citizen during last year’s protests. He wondered whether it might be used as a tool for “sousveillance,” a term surveillance activists use in reference to turning cameras back on authorities. That didn’t shake out. “Let’s be explicitly clear here,” he says. “The ‘frictionless’ solution provided by a for-profit company for a public health and safety issue is just…not good. There’s no outcome where this ends well.”
Hamid Khan, founder of the Stop LAPD Spying Coalition, an anti-police-surveillance group, says the manhunt wasn’t Citizen gone awry; it was Citizen working as designed. Khan sees the app as part of a “culture of deputization and vigilantism” built on the “see something, say something” ethos of neighborhood watch, now “taking a more technological sort of spin.” He says tools like Citizen, with their patina of officialdom and impartial reporting, “are becoming a license to racially profile and go after some of the most vulnerable community members—particularly the unhoused—and to criminalize them.”’
International coalition calls for action against surveillance-based advertising
Written by Finn Myrstad and Øyvind H. Kaldestad on Forbrukerrådet.
“Every day, consumers are exposed to extensive commercial surveillance online. This leads to manipulation, fraud, discrimination and privacy violations. Information about what we like, our purchases, mental and physical health, sexual orientation, location and political views are collected, combined and used under the guise of targeting advertising.
The collection and combination of information about us not only violates our right to privacy, but renders us vulnerable to manipulation, discrimination and fraud. This harms individuals and society as a whole, says the director of digital policy in the NCC, Finn Myrstad.”
Includes a detailed list of the consequences of surveillance-based advertising.
Perspectives on tackling Big Tech’s market power
Written by Natasha Lomas on TechCrunch.
“Slaughter also argued that it’s important for regulators not to pile all the burden of avoiding data abuses on consumers themselves.
‘I want to sound a note of caution around approaches that are centered around user control,’ she said. ‘I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions… I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.
‘So I really worry about a framework that is built at all around the idea of control as the central tenant or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.’”
Social media thrives on shame – but how should we handle an offensive past coming to light?
Written by Stephanie Soh on gal-dem.
“Because one reason why there is such a visceral ‘gotcha’ response to these posts, is down to the continued dismissal of marginalised communities who want justice for the discrimination they face. Whether it’s Black people speaking out about police brutality, women who have suffered sexual assault being let down by the criminal justice system or trans people who are harassed not only in the street but in supposedly professional spaces, the problems marginalised communities face remain unaddressed. So when a trial-by-social-media does happen after old offensive posts surface, it can be a chance to see justice very publicly served, as well as prove that the discrimination you say you face, really does exist – because it’s right there to see, in a tweet.
But often, raging at individuals for long-past mistakes seems like misplaced energy. What we need to challenge are the people and institutions who continue to discriminate today, and who show no signs of changing.
If serious problems are found, then it’s only right that Ollie and Yorkshire CCC should be held accountable for them. In fact, tackling the systemic problems would not only help prevent the attitudes that manifest themselves as offensive posts, but move us away from a culture where we are so justice-starved that we demand to see heads roll.”
Google’s Quest to Kill the Cookie Is Creating a Privacy Shitshow
Written by Shoshana Wodinsky on Gizmodo.
“Digiday reported this week that some major players in the adtech industry have started drawing up plans to turn FLoC into something just as invasive as the cookies it’s supposed to quash. In some cases, this means companies amalgamating any data scraps they can get from Google with their own catalogs of user info, turning FLoC from an “anonymous” identifier into just another piece of personal data for shady companies to compile. Others have begun pitching FLoC as a great tool for fingerprinting—an especially underhanded tracking technique that can keep pinpointing you no matter how many times you go incognito or flush your cache.
[W]hat if that guy regularly visits websites centered around queer or trans topics? What if he’s trying to get access to food stamps online? This kind of web browsing—just like all web browsing—gets slurped into FLoC’s algorithm, potentially tipping off countless obscure adtech operators about a person’s sexuality or financial situation. And because the world of data sharing is still a (mostly) lawless wasteland in spite of lawmaker’s best intentions, there’s not much stopping a DSP from passing off that data to the highest bidder.”
noyb aims to end “cookie banner terror” and issues more than 500 GDPR complaints
Written by noyb on noyb.
I missed this a couple of weeks ago, and it could make a huge difference to our browsing experiences (and compel sites to do better!).
Note: the misuse and overuse of “crazy” is unfortunate in the linked site. Self Defined dictionary recommends more appropriate, and less ableist, alternative words.
“Today, noyb.eu sent over 500 draft complaints to companies who use unlawful cookie banners - making it the largest wave of complaints since the GDPR came into force.
The GDPR was meant to ensure that users have full control over their data, but being online has become a frustrating experience for people all over Europe. Annoying cookie banners appear at every corner of the web, often making it extremely complicated to click anything but the “accept” button. Companies use so-called “dark patterns” to get more than 90% of users to “agree” when industry statistics show that only 3% of users actually want to agree.
Many internet users mistake this annoying situation as a direct outcome of the GDPR, when in fact companies misuse designs in violation of the law. The GDPR demands a simple “yes” or “no”, as reasonable people would expect, but companies often have the power over the design and narrative when implementing the GDPR.”
I Would Rather Die Than Let Facebook Monitor My Heart Rate
Written by Victoria Song on Gizmodo.
“I’m well aware that if you want your health data to remain private, smartwatches are certainly risky. But we’re way past that now. These devices can and have saved lives, and despite some early skepticism, wearables aren’t going anywhere. Why pick a smartwatch made by a company whose founder called early users ‘dumb fucks’ for trusting him? Why trust the company that had a full-page temper tantrum in several national newspapers because Apple introduced stronger privacy features? I’ve got two drawers bursting with smartwatches launched in 2020—there are plenty of lesser evils to choose from.”
What Really Happened When Google Ousted Timnit Gebru
Written by Tom Simonite on Wired.
A very long read, but a fascinating insight into how ethics research works (or rather, doesn’t) inside Google, which I imagine can be extrapolated to other corporations.
“Gebru’s career mirrored the rapid rise of AI fairness research, and also some of its paradoxes. Almost as soon as the field sprang up, it quickly attracted eager support from giants like Google, which sponsored conferences, handed out grants, and hired the domain’s most prominent experts. Now Gebru’s sudden ejection made her and others wonder if this research, in its domesticated form, had always been doomed to a short leash. To researchers, it sent a dangerous message: AI is largely unregulated and only getting more powerful and ubiquitous, and insiders who are forthright in studying its social harms do so at the risk of exile.
To some, the drama at Google suggested that researchers on corporate payrolls should be subject to different rules than those from institutions not seeking to profit from AI. In April, some founding editors of a new journal of AI ethics published a paper calling for industry researchers to disclose who vetted their work and how, and for whistle-blowing mechanisms to be set up inside corporate labs. ‘We had been trying to poke on this issue already, but when Timnit got fired it catapulted into a more mainstream conversation,’ says Savannah Thais, a researcher at Princeton on the journal’s board who contributed to the paper. ‘Now a lot more people are questioning: Is it possible to do good ethics research in a corporate AI setting?’
If that mindset takes hold, in-house ethical AI research may forever be held in suspicion—much the way industrial research on pollution is viewed by environmental scientists.
Inioluwa Deborah Raji, whom Gebru escorted to Black in AI in 2017, and who now works as a fellow at the Mozilla Foundation, says that Google’s treatment of its own researchers demands a permanent shift in perceptions. ‘There was this hope that some level of self-regulation could have happened at these tech companies,’ Raji says. ‘Everyone’s now aware that the true accountability needs to come from the outside—if you’re on the inside, there’s a limit to how much you can protect people.’
[Gebru]’s been thinking back to conversations she’d had with a friend who warned her not to join Google, saying it was harmful to women and impossible to change. Gebru had disagreed, claiming she could nudge things, just a little, toward a more beneficial path. ‘I kept on arguing with her,’ Gebru says. Now, she says, she concedes the point.
Huge Chunk of the Internet Goes Offline Thanks to One Company
Written by Matt Novak on Gizmodo.
“The outage will likely draw attention to how centralized our ‘decentralized’ internet really is—a depressing reminder as ransomware attacks hit at critical infrastructure around the world.”