Apple’s proposed protections against child predators — a challenge to privacy?

By Claude Barfield

Two weeks ago, with little prior consultation, Apple announced that in the fall it would introduce a system tied to its iCloud photo service to identify potential child predators using its platform for nefarious purposes. As now widely reported, the reactions were wildly divergent and vehement. Many law enforcement officials and child protection organizations, such as the National Center for Missing and Exploited Children (NCMEC), strongly applauded the move, calling it a "game changer." Privacy defenders such as the Electronic Frontier Foundation and the Center for Democracy and Technology, along with Facebook's WhatsApp and many cybersecurity experts, condemned the action as a "back door" and a slippery slope toward opening the floodgates to future privacy invasions by authoritarian governments. Earlier this week, my AEI colleague Jim Harper joined this chorus of privacy warnings.


Within the confines of this blog space, here are my reflections on the surrounding and immediate issues. First, as background: Despite pressure from governments and influential public interest groups, Apple has steadfastly maintained its commitment to end-to-end encryption for its iPhones and computers, even when former Attorney General William Barr accused the company of disloyalty to the US in its Chinese business practices and of aiding murderers in the notorious San Bernardino, California, and Pensacola, Florida, cases. While caving on some important issues (e.g., removing FaceTime in Saudi Arabia and excising some apps from its Chinese App Store), Apple has held the line on encryption and against so-called back doors; at least, possibly, until now.

In its recent action, Apple seems to have felt that with pressure unrelenting and Congress threatening mandatory legislation, the better part of valor was to make a significant gesture against child predators. Thus, the company announced several moves two weeks ago, but for purposes here the most damaging (from privacy groups’ perspective) was the aforementioned decision to introduce software, in coming months, directly on Apple devices that would allow scanning of pictures uploaded to iCloud Photos. (For greater technical detail, see here and here.)

With the newly installed software, Apple will be able to check iCloud pictures against known child sexual abuse material gathered by organizations such as the NCMEC. Apple will not actually view the pictures; instead, it will use a newly created algorithm that can recognize known child abuse images via a digital fingerprint called a "hash." The system will be triggered only if 30 or more such images are discovered, and Apple has introduced a human review step to double-check results before passing them along to government investigators. Apple maintains there is only a one-in-one-trillion chance per year of incorrectly flagging a given account.
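To make the matching-and-threshold logic concrete, here is a minimal sketch in Python. The function names, the use of SHA-256, and the plain set lookup are illustrative assumptions only; Apple's actual system relies on a perceptual hash (NeuralHash) and cryptographic private set intersection, which this sketch does not attempt to reproduce.

```python
import hashlib
from typing import Iterable, Set

MATCH_THRESHOLD = 30  # Apple's stated per-account reporting threshold


def image_fingerprint(image_bytes: bytes) -> str:
    """Stand-in fingerprint ("hash") of an image.

    Illustrative assumption: a cryptographic digest is used here for
    simplicity; Apple's real system uses a perceptual hash (NeuralHash)
    that tolerates resizing and recompression, which SHA-256 does not.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def count_matches(uploads: Iterable[bytes], known_hashes: Set[str]) -> int:
    """Count uploaded images whose fingerprint appears in the database
    of known material supplied by organizations such as the NCMEC."""
    return sum(1 for img in uploads if image_fingerprint(img) in known_hashes)


def should_escalate_to_human_review(uploads: Iterable[bytes],
                                    known_hashes: Set[str]) -> bool:
    """An account is surfaced for human review only once the match
    count reaches the threshold; no single match triggers anything."""
    return count_matches(uploads, known_hashes) >= MATCH_THRESHOLD
```

The design point the sketch highlights is that no individual match is reported: only an account that crosses the 30-image threshold is surfaced at all, and even then a human reviewer checks the results before anything is passed to investigators, which is how Apple arrives at its very low claimed false-flag rate.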

Several points in reaction to all of this: First, Apple erred (as it has since admitted) in abruptly announcing the potentially disruptive technology without first establishing a path to consult experts, vitally interested privacy groups, and the computer scientists who have been working toward a common solution for some years. Further, as critics have noted, Apple stoutly defends the safety and reliability of the new system, but as one computer science critic pointed out, "Apple has allowed few if any independent computer scientists to test its algorithm." This leads to one suggestion, whatever the final outcome of Apple's decision: The company would be well advised to delay implementation of the new software. That would allow both the software and the criticisms of it that rest not on policy but on technological uncertainty to be adequately assessed through outside testing and evaluation.

Second, moving to the policy realm, we should be clear on where matters stand. While Apple has introduced a new technological conundrum regarding child security versus privacy, tech companies such as Facebook, Google, and Microsoft have long scanned users' photos on their servers. The new wrinkle in Apple's proposed move is to place the scanning software on the devices (iPhones, iPads, and computers) themselves. Critics are correct in predicting that governments, both democratic and authoritarian, will inevitably pressure Apple to tweak and expand the algorithm's boundaries to surveil for other purposes. Apple has adamantly denied that it would ever agree to any such demands. But it must be noted that Apple's new "fix" will greatly complicate its fights with governments in the future.

Until now, Apple could argue that with the most advanced iPhones, the company itself could not break their encryption. But with the new scanning software sitting directly on the device, that defense is moot. Thus, returning to the question posed in the title, the answer must be "we do not know." All hangs on Apple's ability, and steadfastness, to uphold its signature privacy creed in the face of new obstacles it has created for itself.
