Tag: ethics
-
What Really Happened When Google Ousted Timnit Gebru
Written by Tom Simonite on Wired.
A very long read, but a fascinating insight into how ethics research works (or rather, doesn’t) inside Google, which I imagine can be extrapolated to other corporations.
“Gebru’s career mirrored the rapid rise of AI fairness research, and also some of its paradoxes. Almost as soon as the field sprang up, it quickly attracted eager support from giants like Google, which sponsored conferences, handed out grants, and hired the domain’s most prominent experts. Now Gebru’s sudden ejection made her and others wonder if this research, in its domesticated form, had always been doomed to a short leash. To researchers, it sent a dangerous message: AI is largely unregulated and only getting more powerful and ubiquitous, and insiders who are forthright in studying its social harms do so at the risk of exile.
…
To some, the drama at Google suggested that researchers on corporate payrolls should be subject to different rules than those from institutions not seeking to profit from AI. In April, some founding editors of a new journal of AI ethics published a paper calling for industry researchers to disclose who vetted their work and how, and for whistle-blowing mechanisms to be set up inside corporate labs. ‘We had been trying to poke on this issue already, but when Timnit got fired it catapulted into a more mainstream conversation,’ says Savannah Thais, a researcher at Princeton on the journal’s board who contributed to the paper. ‘Now a lot more people are questioning: Is it possible to do good ethics research in a corporate AI setting?’
If that mindset takes hold, in-house ethical AI research may forever be held in suspicion—much the way industrial research on pollution is viewed by environmental scientists.
…
Inioluwa Deborah Raji, whom Gebru escorted to Black in AI in 2017, and who now works as a fellow at the Mozilla Foundation, says that Google’s treatment of its own researchers demands a permanent shift in perceptions. ‘There was this hope that some level of self-regulation could have happened at these tech companies,’ Raji says. ‘Everyone’s now aware that the true accountability needs to come from the outside—if you’re on the inside, there’s a limit to how much you can protect people.’
…
[Gebru]’s been thinking back to conversations she’d had with a friend who warned her not to join Google, saying it was harmful to women and impossible to change. Gebru had disagreed, claiming she could nudge things, just a little, toward a more beneficial path. ‘I kept on arguing with her,’ Gebru says. Now, she says, she concedes the point.
Read ‘What Really Happened When Google Ousted Timnit Gebru’ on the Wired site.
-
How Big Tech Manipulates Academia to Avoid Regulation
Written by Rodrigo Ochigame on The Intercept.
“There is now an enormous amount of work under the rubric of ‘AI ethics.’ To be fair, some of the research is useful and nuanced, especially in the humanities and social sciences. But the majority of well-funded work on ‘ethical AI’ is aligned with the tech lobby’s agenda: to voluntarily or moderately adjust, rather than legally restrict, the deployment of controversial technologies.
…
No defensible claim to ‘ethics’ can sidestep the urgency of legally enforceable restrictions to the deployment of technologies of mass surveillance and systemic violence.”
Read ‘How Big Tech Manipulates Academia to Avoid Regulation’ on The Intercept site.
Tagged with: ethics, artificial intelligence, regulation.
-
The biggest lie tech people tell themselves — and the rest of us
Written by Rose Eveleth on Vox.
“[T]he assertion that technology companies can’t possibly be shaped or restrained with the public’s interest in mind is to argue that they are fundamentally different from any other industry. They’re not.”
…
“There’s a growing chasm between how everyday users feel about the technology around them and how companies decide what to make. And yet, these companies say they have our best interests in mind. We can’t go back, they say. We can’t stop the ‘natural evolution of technology.’ But the ‘natural evolution of technology’ was never a thing to begin with, and it’s time to question what ‘progress’ actually means.”
Read ‘The biggest lie tech people tell themselves — and the rest of us’ on the Vox site.
Tagged with: facial recognition, ethics, progress.
-
This One Weird Trick Tells Us Everything About You: In Print!
I almost forgot to share photos of the gorgeous printed Smashing Magazine! Getting all this into 2000 words was a challenge, but I’m happy with the result.
Read more…
-
Smashing TV Livestream: Towards Ethics & Privacy By Default
Tomorrow afternoon I’ll be on a panel with the other folks who contributed to the new Smashing print magazine on Ethics and Privacy. You can watch it live on YouTube (be aware that Google is tracking you!) at 1pm GMT / 2pm Irish time / 3pm Barcelona time.
Read more…
-
This One Weird Trick Tells Us Everything About You
I wrote a little essay for Smashing Print #1: Ethics & Privacy titled ‘This One Weird Trick Tells Us Everything About You.’
Read more…
-
Elastic Brand podcast
Last Friday afternoon, I had a lovely chat with Liz Elcoate about ethics, inclusivity and accessibility in design and branding. That chat is now available as Episode 4 of Liz’s podcast, The Elastic Brand.
Read more…
-
Algorithms alone can’t meaningfully hold other algorithms accountable
Written by Frank Pasquale on Real Life.
“The debate over the terms and goals of accountability must not stop at questions like ‘Is the data processing fairer if its error rate is the same for all races and genders?’ We must consider broader questions, such as whether these tools should be developed and deployed at all.”
“The dispute over how to reform or restrict algorithms is rooted in a conflict over to whom algorithmic processes should be accountable. If it’s to a community of engineers and technocrats, then accountability will usually mean more comprehensive data collection to produce less biased algorithms. If it is accountability to the public at large, there are broader issues to consider, such as what limits should be placed on these tools’ use and commercialization, if they should even be developed at all.”
It’s all too quotable.
Frank Pasquale also recommends reading Safiya Umoja Noble and Virginia Eubanks:
“Scholars like Noble and Eubanks need to be at the center of future conversations about algorithmic accountability. They have exposed deep problems at the core of the political economy of information, in data-driven social control. They diversify the forms of expertise and authority that should be recognized in the development of better socio-technical systems. And they are not afraid to question the goals — and not simply the methods — of powerful firms and governments, foregrounding the question of to whom algorithmic systems are accountable.”
Read ‘Algorithms alone can’t meaningfully hold other algorithms accountable’ on the Real Life site.
Tagged with: algorithms, systemic discrimination, ethics.
-
Is there potential for “ethical analytics”?
We have Piwik analytics on the Ind.ie site, and I use Gauges and GoSquared on my own site (I was indecisive at the time…). But I use Ghostery, so I actually block analytics like these in my own browsing.
Read more…
-
The Illusion Of Free on A List Apart
Last week, my column on The Illusion Of Free was published on A List Apart. I took a lot of time writing it, as I think it’s an incredibly important issue, and it reflects my thinking around Ind.ie.
Read more…