Apple has said it will implement a system that checks photos on iPhones in the United States for matches with known images of child sexual abuse before they are uploaded to its iCloud storage services.
If enough child abuse image uploads are detected, Apple will initiate a human review and report the user to law enforcement officials, the company said.
Apple said the system is designed to reduce false positives to one in one trillion.
With the new system, Apple is trying to address two imperatives: Requests from law enforcement to help stem child sexual abuse, and the privacy and security practices that the company has made a core tenet of its brand.
Other companies such as Facebook use similar technology to detect and report child sexual abuse.
Here is how Apple's system works. Law enforcement officials maintain a database of known child sexual abuse images and translate those images into "hashes," numerical codes that positively identify the image but cannot be used to reconstruct it.
Apple has made its own implementation of that database using a technology called "NeuralHash" that is designed to also catch images that have been edited but remain similar to the originals. That database will be stored on iPhones.
When a user uploads an image to Apple's iCloud storage service, the iPhone will create a hash of the image to be uploaded and compare it against the database. Photos stored only on the phone are not checked, Apple said.
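The matching flow described above can be sketched in a few lines. This is a minimal illustration, not Apple's implementation: SHA-256 stands in for the proprietary NeuralHash, which, unlike a cryptographic hash, is also designed to match edited versions of an image, and the function names and database here are hypothetical.

```python
import hashlib

# Hypothetical stand-in for the on-device database of known hashes.
# In Apple's system, this is derived from the law-enforcement database
# and stored on the iPhone itself.
KNOWN_ABUSE_HASHES = set()

def image_hash(image_bytes: bytes) -> str:
    """Derive a fixed-size code that identifies an image but cannot
    be used to reconstruct it. (SHA-256 is used here only to
    illustrate the flow; the real system uses NeuralHash.)"""
    return hashlib.sha256(image_bytes).hexdigest()

def check_before_upload(image_bytes: bytes) -> bool:
    """Return True if the image matches a known hash and should be
    flagged before the iCloud upload. Photos that are never uploaded
    would not pass through this check."""
    return image_hash(image_bytes) in KNOWN_ABUSE_HASHES
```

The key property the sketch captures is that the comparison happens on the device, against a local copy of the hash database, before any upload occurs.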
The Financial Times earlier reported some aspects of the program.
One key aspect of the system is that Apple checks photos stored on phones before they are uploaded, rather than checking the photos after they arrive on the company's servers.
On Twitter, some privacy and security experts expressed concerns that the system could eventually be expanded to scan phones more generally for prohibited content or political speech.
"Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content," Matthew Green, a security researcher at Johns Hopkins University, wrote in response to the earlier reports.
"Whether they turn out to be right or wrong on that point hardly matters. This will break the dam - governments will demand it from everyone."
Other privacy researchers such as India McKinney and Erica Portnoy of the Electronic Frontier Foundation wrote in a blog post that it may be impossible for outside researchers to verify whether Apple keeps its promise to check only a small set of on-device content.
The move is "a shocking about-face for users who have relied on the company’s leadership in privacy and security," the pair wrote.
"At the end of the day, even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor," McKinney and Portnoy wrote.