Tagged with: “business models”
Written by Chris Gilliard on OneZero.
“Even a cursory look at Facebook’s “mistakes,” as they refer to them (or “Facebook’s business model” as it is known to most everyone outside of the company), includes redlining users, enabling age discrimination in hiring, offering “Jew haters” as an advertising category, promoting the “boogaloo” movement, fueling genocide in Myanmar, and aiding Duterte’s rise in the Philippines. It’s not so much that the problem of hate on Facebook is new, so much as that each new revelation is met mostly with an apology and a “promise” to do better moving forward. Facebook has been apologizing and promising this way since at least 2007. Yet the “mistakes” continue.”
“A company whose business model necessitates that it consistently discharge poison into the environment should be dismantled.”
Written by Yaël Eisenstat in The Washington Post.
“[T]rue transparency would include information about the tools that differentiate advertising on Facebook from traditional print and television, and in fact make it more dangerous: Can I see if a political advertiser used the custom audience tool, and if so, if my email address was uploaded? Can I see what look-alike audience advertisers are seeking? Can I see a true, verified name of the advertiser in the disclaimer? Can I see if and how your algorithms amplified the ad? If not, the claim that Facebook is simply providing a level playing field for free expression is a myth.
Free political speech is core to our democratic principles, and it’s true that social media companies should not be the arbiters of truth. But the only way Facebook or other companies that use our behavioral data to potentially manipulate us through targeted advertising can prevent abuse of their platform to harm our electoral process is to end their most egregious targeting and amplification practices and provide real transparency.
We need lawmakers and regulators to help protect our children, our cognitive capabilities, our public square and our democracy by creating guardrails and rules to deal directly with the incentives and business models of these platforms and the societal harms they are causing.”
Millions of people uploaded photos to the Ever app. Then the company used them to develop facial recognition tools.
Written by Olivia Solon and Cyrus Farivar for NBC News.
“Ever AI promises prospective military clients that it can ‘enhance surveillance capabilities’ and ‘identify and act on threats.’ It offers law enforcement the ability to identify faces in body-cam recordings or live video feeds.”