Note: This was originally published as the daily newsletter for the Columbia Journalism Review, where I am the chief digital writer.
Earlier this month, Apple announced a series of steps it is taking to help keep children safe online. One of those new additions is a feature for Siri, its intelligent assistant, that will automatically suggest a help-line number if someone searches for child-exploitation material, and another is a feature that scans images shared through iMessage, to make sure children aren’t sending sexually explicit pictures of themselves in a chat window. Neither of these new features sparked much controversy, since virtually everyone agrees that the online sharing of child sexual-abuse material is a significant problem that needs to be solved, and that technology companies need to be part of the solution. The third plank in Apple’s new approach to dealing with this kind of content, however, triggered a huge backlash: rather than simply scanning photos after they are uploaded to Apple’s servers in the cloud, the company said it will start scanning the photos on users’ phones to see whether they match an international database of child-abuse content.
As Alex Stamos, former Facebook security chief, pointed out in an interview with Julia Angwin, founder and editor of The Markup, scanning uploaded photos to see if they include pre-identified examples of child sexual-abuse material has been going on for a decade or more, ever since companies like Google, Microsoft, and Facebook started offering cloud-based image storage. The process relies on a database of photos maintained by the National Center for Missing and Exploited Children, each of which comes with a unique digital fingerprint known as a “hash.” Cloud companies compute the same kind of hash for every image uploaded to their servers, compare it against the database, and then flag and report the ones that match. Federal law doesn’t require companies to search for such images — and until now, Apple has not done so — but it does require them to report such content if they find it.
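To make that matching step concrete, here is a minimal sketch of the server-side check as cloud providers have long run it. It is illustrative only: the database contents and function names are hypothetical, and a real system would use a perceptual hash, such as Microsoft’s PhotoDNA, rather than the cryptographic SHA-256 used here, so that resized or re-encoded copies of a known image still match.

```python
import hashlib

# Hypothetical stand-in for the hash database maintained by the National
# Center for Missing and Exploited Children. SHA-256 is used only to keep
# the sketch simple; it matches byte-identical files, whereas perceptual
# hashes also catch near-duplicates.
KNOWN_HASHES = {
    "hypothetical-hash-value-1",
    "hypothetical-hash-value-2",
}

def image_hash(image_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded image."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_flag(image_bytes: bytes) -> bool:
    """True if the image matches the database and should be flagged
    and reported, per the process described above."""
    return image_hash(image_bytes) in KNOWN_HASHES
```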
What Apple plans to do is implement this process on a user’s phone, before anything is uploaded to the cloud. The company says this is a better way of cracking down on such material, but critics say it is not only a significant breach of privacy, but also opens a door to other potential invasions, by the US government and other state actors, that can’t easily be closed. The Electronic Frontier Foundation called the new feature a “backdoor to your private life,” and Mallory Knodel, chief technology officer at the Center for Democracy and Technology, told me in an interview on CJR’s Galley discussion platform that this capability could easily be expanded to other forms of content “by Apple internal policy as well as US government policy, or any government orders around the world.” Although Apple often maintains that it cares more about user privacy than any other technology company, Knodel and other critics note that the company still gave the Chinese government virtually unlimited access to user data for citizens in that country.
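In other words, the same comparison moves from the provider’s servers to the handset. Here is a minimal sketch of that change of venue, reusing the hypothetical should_flag helper above; the cloud client and its methods are invented for illustration, and Apple’s actual design is considerably more elaborate.

```python
def upload_photo(image_bytes: bytes, cloud) -> None:
    # Under the announced design, the match against the database runs on
    # the user's device before the photo leaves it, rather than on
    # Apple's servers afterward. `cloud` is a hypothetical client object.
    if should_flag(image_bytes):
        cloud.report_match(image_bytes)  # surfaced for review and reporting
    cloud.store(image_bytes)
```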