Is Facebook finally deciding to take responsibility for its actions? Or, maybe more precisely, its inactions?
Facing withering criticism for a hands-off policy that allowed fake political posts to sway the 2016 general election and provided a mechanism for groups to incite violence in developing countries, Facebook says it's getting the message.
Sort of, anyway.
And that's why we will soon see a Facebook Oversight Board.
In September 2019, Facebook's founder and CEO Mark Zuckerberg said the company was taking steps internally to identify and keep out fake and dangerously inflammatory content. He also announced that an oversight board (first mentioned by him a year earlier) would soon exist to function, supposedly, somewhat like an independent court.
The board, which will have 40 members, is in the process of forming. But once it's up and running — supposedly by summer 2020 — what, exactly, will it do? And what kind of influence might it have on Facebook's activities?
In January, Facebook announced a set of proposed bylaws, the passage of which presumably will be one of the board's first items of business. Among other things, the bylaws spell out what the relationship between Facebook and the Oversight Board would be.
Facebook says that cases heard by the board will initially involve posts that Facebook has taken down. Those cases can be referred to the board either by people who disagree with Facebook's decision or by Facebook itself.
The proposed bylaws state that Facebook will implement the board's content decisions within seven days. If the board also issues recommendations that could shape broader future policy, Facebook says the entire procedure — from the beginning of the case to Facebook's final action on it — could take 90 days.
In other words, it doesn't sound like this is going to stop the spread of viral misinformation.
One of the big issues looming over all this, of course, is whether Facebook might end up feeling a responsibility to fact-check political ads.
Zuckerberg has maintained that Facebook is a tech platform, not a publisher. While the company has fact-checkers who watch for dangerous or damaging information in other forms, its policy toward posts (paid or unpaid) from politicians is to keep hands off.
Speaking at Georgetown University in October 2019, Zuckerberg said he had thought about banning political ads — which constitute a minuscule portion of Facebook's revenue, he said — but rejected it.
"(W)hile I certainly worry about an erosion of truth, I don't think most people want to live in a world where you can only post things that tech companies judged to be 100% true," he said. "Banning political ads favors incumbents and whoever the media chooses to cover."
As the 2020 general election heats up, it will be interesting to see what Facebook and its Oversight Board will do.
One of the known aspects of the board's anticipated launch this summer is that it will initially consider only cases in which individuals believe their content was removed in error. This means it won't be looking at posts that were erroneously left up.
At Lawfare, Evelyn Douek focuses on this seemingly odd fact: "It is like introducing video appeals to tennis to make calls more accurate but allowing players a review only when balls are called 'out' and not when a ball is called 'in,' no matter how erroneous the call seems. … (I)t is a disappointing limitation and represents an uncharacteristically incremental approach from a company famous for 'moving fast.'"
When you consider that Facebook has some 2.5 billion active users worldwide, two questions arise: How can you police something this big? And how much of an impact can the Oversight Board have?
"There's no way to know how effective the board will be until it's up and running," Mariel Soto Reyes wrote recently in Business Insider, "and even at its best it's still just addressing a small piece of the content moderation puzzle."
Still, it seems like a step in the right direction.