Written by Tom Simonite for Wired.
A very long read, but a fascinating insight into how ethics research works (or rather, doesn't) inside Google, which I imagine can be extrapolated to other corporations.
“Gebru’s career mirrored the rapid rise of AI fairness research, and also some of its paradoxes. Almost as soon as the field sprang up, it quickly attracted eager support from giants like Google, which sponsored conferences, handed out grants, and hired the domain’s most prominent experts. Now Gebru’s sudden ejection made her and others wonder if this research, in its domesticated form, had always been doomed to a short leash. To researchers, it sent a dangerous message: AI is largely unregulated and only getting more powerful and ubiquitous, and insiders who are forthright in studying its social harms do so at the risk of exile.
To some, the drama at Google suggested that researchers on corporate payrolls should be subject to different rules than those from institutions not seeking to profit from AI. In April, some founding editors of a new journal of AI ethics published a paper calling for industry researchers to disclose who vetted their work and how, and for whistle-blowing mechanisms to be set up inside corporate labs. ‘We had been trying to poke on this issue already, but when Timnit got fired it catapulted into a more mainstream conversation,’ says Savannah Thais, a researcher at Princeton on the journal’s board who contributed to the paper. ‘Now a lot more people are questioning: Is it possible to do good ethics research in a corporate AI setting?’
If that mindset takes hold, in-house ethical AI research may forever be held in suspicion—much the way industrial research on pollution is viewed by environmental scientists.
Inioluwa Deborah Raji, whom Gebru escorted to Black in AI in 2017, and who now works as a fellow at the Mozilla Foundation, says that Google’s treatment of its own researchers demands a permanent shift in perceptions. ‘There was this hope that some level of self-regulation could have happened at these tech companies,’ Raji says. ‘Everyone’s now aware that the true accountability needs to come from the outside—if you’re on the inside, there’s a limit to how much you can protect people.’
[Gebru]’s been thinking back to conversations she’d had with a friend who warned her not to join Google, saying it was harmful to women and impossible to change. Gebru had disagreed, claiming she could nudge things, just a little, toward a more beneficial path. ‘I kept on arguing with her,’ Gebru says. Now, she says, she concedes the point.”