Facebook, Twitter and what news is fit to share

Note: This was originally written for the daily newsletter at the Columbia Journalism Review, where I am the chief digital writer

In an unprecedented move Wednesday, both Facebook and Twitter took steps to limit the distribution of a news story from a mainstream publication, on the grounds that it was a) based on hacked emails and b) of questionable accuracy. Twitter actually prevented users from posting a link to the story, and in some cases prevented users from clicking on existing links to it as well, showing them a warning that said the story violated the company’s terms of service. Facebook didn’t stop anyone from posting a link to the story, but reduced its reach by tweaking the News Feed algorithm so fewer users would see it.

The story was a New York Post report alleging that Hunter Biden introduced his father Joe to the head of a natural gas company in Ukraine. The source? Emails allegedly retrieved from the younger Biden’s laptop by a computer repair shop and given to Trump attorney Rudy Giuliani. In Twitter’s case, the company argued that the story breached its policy against the distribution of content obtained through hacking, and said documents included with the story also contained an individual’s name and identifying information, which is against its privacy rules. Facebook, meanwhile, said its policy against “hack and leak” operations required it to reduce the distribution of the story while it was being fact-checked by third-party partners.

These moves, not surprisingly, triggered an avalanche of accusations of censorship from conservatives. Sen. Josh Hawley went so far as to argue in a letter to the Federal Election Commission that removing the story was a benefit to Biden, and therefore amounted to a campaign finance violation, and said the Judiciary Committee would vote on whether to subpoena Twitter CEO Jack Dorsey to explain his actions. Others, including Sen. Ted Cruz, argued that Facebook and Twitter had breached the First Amendment. Rep. Doug Collins said that the blocks were “a grave threat to our democracy.” These arguments, of course, ignore the fact that Facebook and Twitter are themselves protected by the First Amendment, and also by Section 230 of the Communications Decency Act, which allows them to make content-moderation decisions without penalty.

Many of these accusations are clearly being made in bad faith, and are a variation on the “platforms censor conservatives” canard that has been rattling around Congress for years without a shred of evidence. At the same time, however, it’s true that the decisions made by the two platforms are problematic in a number of ways. For one thing, Twitter’s policy not to allow users to post “content published without authorization” is extremely vague, and could theoretically block not just questionable stories from the New York Post, but valuable investigative stories based on leaked content, including the Pentagon Papers and virtually everything from WikiLeaks.

This highlights a broader problem with both platforms: a lack of detail about their policies, and about how and when those policies are likely to be applied. Twitter CEO Jack Dorsey admitted that the company didn’t do a good job of explaining itself when it first blocked the Post story, but the follow-up wasn’t much help either. Facebook, meanwhile, has a habit of pointing to the algorithm as though it were a magic wand that can erase any problem that comes up, and routinely promises things that never come to pass.

“There will be battles for control of the narrative again and again over the coming weeks,” Evelyn Douek, a lecturer at Harvard Law School, told the New York Times. “The way the platforms handled it is not a good harbinger of what’s to come.”

This is not only infuriating for those who would like some clarity on the decision-making at these platforms, but it makes it that much easier for bad-faith actors like Hawley and Cruz to argue that the companies are doing something unsavory or illegal, which leads to show-trial-style hearings that often amount to a lot of sound and fury, signifying very little. If we are to trust these giant tech corporations to make decisions about what kind of journalism can be shared on their networks, we’re going to need a lot more transparency and a lot less hand-waving.

About mathewi

I'm the chief digital writer at the Columbia Journalism Review in New York, and a former writer for Fortune magazine and the Globe and Mail newspaper.
