Apple’s plan to scan images on users’ phones sparks backlash

Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer

Earlier this month, Apple announced a series of steps it is taking to help keep children safe online. One of those new additions is a feature for Siri, its intelligent assistant, that will automatically suggest a help-line number if someone searches for child-exploitation material, and another is a new feature that scans images shared through iMessage, to make sure children aren’t sharing unsafe pictures of themselves in a chat window. Neither of these new features sparked much controversy, since virtually everyone agrees that online sharing of child sexual-abuse material is a significant problem that needs to be solved, and that technology companies need to be part of that solution. The third plank in Apple’s new approach to dealing with this kind of content, however, triggered a huge backlash: rather than simply scanning photos that are uploaded to Apple’s servers in the cloud, the company said it will start scanning the photos that users have on their phones to see whether they match an international database of child-abuse content.

As Alex Stamos, former Facebook security chief, pointed out in an interview with Julia Angwin, founder and editor of The Markup, scanning uploaded photos to see if they include pre-identified examples of child sexual-abuse material has been going on for a decade or more, ever since companies like Google, Microsoft, and Facebook started offering cloud-based image storage. The process relies on a database of photos maintained by the National Center for Missing and Exploited Children, each of which comes with a unique digital fingerprint known as a “hash.” Cloud companies compute the same kind of hash for images uploaded to their servers, compare it against the database, and then flag and report the ones that match. Federal law doesn’t require companies to search for such images — and until now, Apple has not done so — but it does require them to report such content if they find it.
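The matching flow described above can be sketched in a few lines of Python. This is a deliberately simplified illustration: the hash set and function names here are invented for the example, and real systems use perceptual hashes such as Microsoft’s PhotoDNA (which match visually similar images even after resizing or recompression), not an exact cryptographic digest like the SHA-256 used here.

```python
import hashlib

# Illustrative stand-in for the NCMEC hash database. This entry happens
# to be the SHA-256 digest of the bytes b"test"; a real database would
# hold perceptual hashes of known abuse imagery, distributed to providers.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def hash_image(image_bytes: bytes) -> str:
    """Compute a hex digest serving as the image's fingerprint."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag_upload(image_bytes: bytes) -> bool:
    """Return True if an uploaded image matches a known hash and
    should be flagged and reported by the cloud provider."""
    return hash_image(image_bytes) in KNOWN_HASHES
```

The controversy over Apple’s plan is about *where* this check runs: in the server-side model sketched here, only material a user chooses to upload is ever hashed, whereas Apple proposed running the comparison on the device itself.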

What Apple plans to do is to implement this process on a user’s phone, before anything is uploaded to the cloud. The company says this is a better way of cracking down on this kind of material, but its critics say it is not only a significant breach of privacy, but also opens a door to other potential invasions by the government, and other state actors, that can’t easily be closed. The Electronic Frontier Foundation called the new feature a “backdoor to your private life,” and Mallory Knodel, chief technology officer at the Center for Democracy and Technology, told me in an interview on CJR’s Galley discussion platform that this ability could easily be expanded to other forms of content “by Apple internal policy as well as US government policy, or any government orders around the world.” Although Apple often maintains that it cares more about user privacy than any other technology company, Knodel and other critics note that the company still gave the Chinese government virtually unlimited access to user data for citizens in that country.

Continue reading

Facebook’s excuses for shutting down research ring hollow


Last week, Facebook shut down the personal accounts of several researchers affiliated with New York University, claiming that their work—including a browser extension called Ad Observer, which allows users to share the ads that they are shown in their Facebook news feeds—violated the social network’s privacy policies. The company said that while it wants to help social scientists with their work, it can’t allow user information to be shared with third parties, in part because of the consent decree it signed with the Federal Trade Commission as part of a $5 billion settlement in the Cambridge Analytica case in 2018. Researchers, including some of those who were involved in the NYU project, said Facebook’s behavior was not surprising, given the company’s long history of dragging its feet when it comes to sharing information. And not long after Facebook used the FTC consent decree as a justification for the shutdown, the federal agency took the unusual step of making public a letter it sent to Mark Zuckerberg, Facebook’s CEO, stating that if the company had contacted the FTC about the research, “we would have pointed out that the consent decree does not bar Facebook from creating exceptions for good-faith research in the public interest.”

To discuss how Facebook responded in this case, its track record when it comes to social-science research, and the way that other platforms such as Twitter treat researchers, CJR brought together a number of experts using our Galley discussion platform. The group included Laura Edelson, a doctoral candidate in computer science at NYU and one of the senior scientists on the Ad Observatory team; Jonathan Mayer, a professor at Princeton and former chief technologist with the Federal Communications Commission; Julia Angwin, founder and editor-in-chief of The Markup, a data-driven investigative reporting startup that has a similar ad research tool called Citizen Browser; Neil Chilson, a fellow at the Charles Koch Institute and former chief technologist at the Federal Trade Commission; Nathalie Marechal of Ranking Digital Rights; and Rebekah Tromble, an associate professor and director of the Institute for Data, Democracy & Politics at George Washington University.

Edelson has said the drastic action Facebook took against her and the rest of the team was the culmination of a series of escalating threats about the group’s research (they are currently lobbying the company to get their accounts reinstated), but that she also has good relationships with some people at the social network. “Facebook’s behavior toward our group has been… complicated,” she said. Since the group studies the safety and efficacy of Facebook’s systems around political ads and misinformation, Edelson said “there is always going to be an inherent tension there,” but that there are several people she has worked with at Facebook who are “smart and dedicated.” One thing that makes the company’s behavior somewhat confusing is that the user information Facebook says it is trying to protect is the names of advertisers in its political ad program, which are publicly available through its own Ad Library. “Those are, technically speaking, Facebook user names,” Edelson says. “We think they are public, and Facebook is saying they are not.”

Continue reading

Facebook shuts down research, blames user privacy rules


Last October, Facebook warned a group of social scientists from New York University that their research — known as the Ad Observatory, part of the Cybersecurity for Democracy Project — was in breach of the social network’s terms of service, because it used software to “scrape” information from Facebook without the consent of the service’s users. The company said that unless the researchers stopped using the browser extension they developed, or changed the way that it acquired information, they would be subject to “additional enforcement action.” Late Tuesday night, Facebook followed through on this threat by blocking the group from accessing any of the platform’s data, and also shutting down the researchers’ personal accounts and pages. In a blog post, the company said it was forced to do so because the browser extension violated users’ privacy. “While the Ad Observatory project may be well-intentioned, the ongoing and continued violations of protections against scraping cannot be ignored,” Facebook said.

The NYU researchers responded that they have taken all the precautions they can to avoid pulling in personally identifiable information from users — including names, user ID numbers, and Facebook friend lists — and also pointed out that the thousands of users who signed up to help the Ad Observatory project installed the group’s browser extension willingly, to help the scientists research the impact of the social network’s ad-targeting algorithms. “Facebook is silencing us because our work often calls attention to problems on its platform,” Laura Edelson, one of the NYU researchers, told Bloomberg News in an email. “Worst of all, Facebook is using user privacy, a core belief that we have always put first in our work, as a pretext for doing this.” Edelson also said on Twitter that the Facebook shutdown has effectively cut off more than two dozen other researchers and journalists who got access to Facebook advertising data through the NYU project.

Unauthorized access to private user data is a sensitive topic for Facebook. In the Cambridge Analytica scandal of 2018, a political consulting firm acquired personally identifiable information on more than 80 million people from a researcher who gained access to it through a seemingly harmless Facebook app. The resulting furor eventually led to a $5 billion settlement with the Federal Trade Commission for breaches of privacy, and the company promised it would never share the personal information of its users with third parties without stringent controls. The ripple effects of the FTC order — combined with the subsequent passing of the European Union’s General Data Protection Regulation or GDPR — led to severe restrictions on the social network’s API (application programming interface), which other web services and software use to exchange data with the social network. And many of those restrictions also affected researchers like those at NYU.

Continue reading

Section 230 critics are forgetting about the First Amendment


A recurring theme in political circles is the idea that giant digital platforms such as Facebook, Twitter, and YouTube engage in bad behavior—distributing disinformation, allowing hate speech, removing conservative opinions, and so on—in part because they are protected from legal liability by Section 230 of the Communications Decency Act, which says they aren’t responsible for content posted by their users. Critics on both sides of the political aisle argue that this protection either needs to be removed or significantly amended because the social networks are abusing it. Former president Donald Trump signed an executive order in an attempt to get the FCC to do something about Section 230, although his efforts went nowhere, and Section 230 also plays a role in his recent lawsuits against Facebook, Google, and Twitter for banning him. President Joe Biden hasn’t pushed anyone to do anything specific yet, but he has said that the clause should be “revoked immediately.”

One of the most recent attempts to change Section 230 comes from Democratic Senator Amy Klobuchar, who has proposed a bill that would carve out an exception for medical misinformation during a health crisis, making the platforms legally liable for distributing anything the government defines as untrue. While this may seem like a worthwhile goal, given the kind of rampant disinformation being spread about vaccines on platforms like Facebook and Google’s YouTube, some freedom of speech advocates argue that even well-intentioned laws like Klobuchar’s could backfire badly and have dangerous consequences. Similar concerns have been raised about a suite of proposed bills introduced by a group of Republican members of Congress, which involve a host of “carve-outs” for Section 230 aimed at preventing platforms from removing certain kinds of content (mostly conservative speech), and forcing them to remove other kinds (cyber-bullying, doxxing, etc.).

To talk about these and related issues, we’ve been interviewing a series of experts in law and technology using CJR’s Galley discussion platform, including Makena Kelly, a policy reporter for The Verge covering topics like net neutrality, data privacy, antitrust, and internet culture; Jeff Kosseff, an assistant professor of cybersecurity law at the United States Naval Academy, and author of “The Twenty-Six Words That Created the Internet,” a history of Section 230; Mike Masnick, who runs technology analysis site Techdirt and co-founded a think tank called the Copia Institute; Mary Anne Franks, professor of law at the University of Miami, and president of the Cyber Civil Rights Initiative; James Grimmelmann, a law professor at Cornell Tech; and Eric Goldman, a professor of law at Santa Clara University.

Continue reading