Apple explains how iPhones scan photos for images of child sexual abuse



Shortly after today’s reports that Apple will begin scanning iPhones for images of child abuse, the company confirmed its plan and provided details in a press release and technical summary.

“Apple’s method of detecting known CSAM (child sexual abuse material) is designed with user privacy in mind,” Apple said. “Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC (National Center for Missing and Exploited Children) and other child safety organizations. Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.”

Apple provided more details on the CSAM detection system in a technical summary, saying its system uses a threshold “set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.”
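
To see why a match threshold drives the account-level error rate so low, consider a rough back-of-the-envelope calculation. The sketch below is not based on Apple's actual parameters, which it has not published in full; every number in it is hypothetical and only illustrates how a per-image false-match rate combines with an account-level threshold.

```python
# Back-of-the-envelope sketch of how a per-image false-match rate and an
# account-level threshold combine. None of these numbers are Apple's; they
# are hypothetical and only show why requiring many matches pushes the
# account-level error rate far below the per-image error rate.
from math import comb

p = 1e-6         # hypothetical per-image false-match probability
n = 20_000       # hypothetical number of photos an account uploads per year
threshold = 10   # hypothetical number of matches required before review

# P(account falsely crosses the threshold) = P(Binomial(n, p) >= threshold),
# summed term by term with a recurrence to avoid huge intermediate values.
term = comb(n, threshold) * p**threshold * (1 - p) ** (n - threshold)
prob, k = 0.0, threshold
while term > 0 and k <= n:
    prob += term
    term *= (n - k) / (k + 1) * p / (1 - p)
    k += 1

print(f"Chance of falsely flagging this hypothetical account: {prob:.2e}")
```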

The changes will roll out “later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey,” Apple said. Apple will also deploy software that can analyze images in the Messages application as part of a new system that will “warn children and their parents when they receive or send sexually explicit photos.”

Apple accused of building “surveillance infrastructure”

Despite Apple’s assurances, security experts and privacy advocates criticized the plan.

“Apple is replacing its industry-standard end-to-end encrypted messaging system with an infrastructure for surveillance and censorship, which will be vulnerable to abuse and scope-creep not just in the US, but around the world,” said Greg Nojeim, co-director of the Security and Surveillance Project at the Center for Democracy and Technology. “Apple should abandon these changes and restore its users’ faith in the security and integrity of their data on Apple devices and services.”

For years, Apple has resisted pressure from the US government to install a “back door” in its encryption systems, saying that doing so would undermine security for all users. Security experts have praised Apple for that stance. But with its plan to deploy software that performs on-device scanning and shares selected results with authorities, Apple is coming dangerously close to acting as a tool for government surveillance, Johns Hopkins University cryptography professor Matthew Green suggested on Twitter.

The client-side scanning Apple announced today could eventually “be a key ingredient in adding surveillance to encrypted messaging systems,” he wrote. “The ability to add scanning systems like this to E2E [end-to-end encrypted] messaging systems has been one of the main asks from law enforcement agencies around the world.”

Message scanning and Siri “intervention”

In addition to scanning devices for images that match the CSAM database, Apple said it will update the Messages app to “add new tools to warn children and their parents when they receive or send sexually explicit photos.”

“Messages uses machine learning on the device to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so Apple does not have access to messages,” Apple said.

When an image is flagged in Messages, “the photo will be blurred and the child will be warned, presented with helpful resources, and reassured that it is okay if they do not want to view this photo.” The system will allow parents to receive a message if children view a flagged photo, and “similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it,” Apple said.

Apple said it will also update Siri and Search to “provide extended information and assistance to parents and children if they are in unsafe situations.” The Siri and Search systems will “step in when users search for CSAM-related queries” and “explain to users that interest in this topic is harmful and problematic, and provide partner resources to help with this issue.”

The Center for Democracy and Technology called the scanning of photographs in Messages a “back door,” writing:

The mechanism that will allow Apple to scan images in Messages is not an alternative to a back door; it is a back door. Client-side scanning at one “end” of the communication breaks the security of the transmission, and informing a third party (the parent) about the content of the communication undermines its privacy. Organizations around the world have warned against client-side scanning because it could be used as a way for governments and companies to control the content of private communications.

Apple’s technology for analyzing images

Apple’s whitepaper on CSAM detection includes a few privacy promises in the introduction. “Apple does not learn anything about images that do not match the known CSAM database,” it says. “Apple can’t access metadata or visual derivatives for matched CSAM images until a matching threshold is exceeded for an iCloud Photos account.”

Apple’s hashing technology is called NeuralHash, and it “analyzes an image and converts it to a unique number specific to that image. Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value,” Apple wrote.
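
NeuralHash itself is a proprietary, neural-network-based perceptual hash, and Apple has not released its implementation. The sketch below uses a classic “average hash” built with the Pillow imaging library purely to illustrate the general idea Apple describes: visually near-identical images (resized or re-encoded copies) map to the same compact fingerprint, while unrelated images do not.

```python
# A minimal perceptual-hash sketch (classic "average hash"), NOT NeuralHash.
# It only illustrates the idea that near-identical images (resized, lightly
# recompressed) tend to produce the same compact fingerprint.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink the image, grayscale it, and set one bit per pixel that is
    brighter than the mean. Visually similar images yield similar bits."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; 0 means the fingerprints are identical."""
    return bin(a ^ b).count("1")

# Hypothetical usage (file names are placeholders): a resized or re-encoded
# copy of the same photo should land at distance 0 or very close to it.
# print(hamming_distance(average_hash("photo.jpg"), average_hash("photo_resized.jpg")))
```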

Before an iPhone or other Apple device uploads an image to iCloud, the “device creates a cryptographic safety voucher that encodes the match result. It also encrypts the image’s NeuralHash and a visual derivative. This voucher is uploaded to iCloud Photos along with the image.”
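
Apple has not published the safety voucher’s exact format, so the following is only a rough sketch of its general shape: per-image metadata (here, a NeuralHash and a visual derivative) sealed with an authenticated cipher under a key the server can recover only for matching images. The field names and the use of AES-GCM from the Python cryptography package are assumptions for illustration.

```python
# A minimal sketch of a "safety voucher"-style payload: the match metadata is
# encrypted under a key that (in Apple's design) only becomes recoverable by
# the server when the image actually matches the blinded database. The real
# voucher format is not public; the field names here are assumptions.
import os, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_voucher(derived_key: bytes, neural_hash: bytes, visual_derivative: bytes) -> dict:
    """Encrypt the per-image payload with AES-GCM under the derived key."""
    payload = json.dumps({
        "neural_hash": neural_hash.hex(),
        "visual_derivative": visual_derivative.hex(),
    }).encode()
    nonce = os.urandom(12)
    ciphertext = AESGCM(derived_key).encrypt(nonce, payload, None)
    return {"nonce": nonce.hex(), "encrypted_payload": ciphertext.hex()}

# Hypothetical usage: in the real design the derived key would come from the
# blinded-hash lookup described later in the article, not from os.urandom.
voucher = build_voucher(os.urandom(32), b"\x12" * 16, b"thumbnail-bytes")
```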

Using “threshold secret sharing,” Apple’s system “ensures the contents of the safety vouchers cannot be interpreted by Apple unless the iCloud Photos account crosses a threshold of known CSAM content,” the document says. “Only when the threshold is exceeded does the cryptographic technology allow Apple to interpret the contents of the safety vouchers associated with the matching CSAM images.”
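
Apple’s summary does not spell out the exact threshold secret sharing construction it uses, but the core property is standard: with fewer than a threshold number of shares, a secret is unrecoverable; at or above the threshold, it can be reconstructed. The sketch below shows textbook Shamir secret sharing as a stand-in for that property, not Apple’s implementation.

```python
# A standard Shamir threshold-secret-sharing sketch (not Apple's exact
# construction): the "secret" can only be reconstructed once at least
# `threshold` shares exist, mirroring the property that vouchers become
# readable only after enough matches accumulate.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

def make_shares(secret: int, threshold: int, total: int):
    """Evaluate a random degree-(threshold-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, total + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0; needs >= threshold distinct shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % PRIME
                den = den * (xj - xm) % PRIME
        secret = (secret + yj * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, threshold=3, total=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares recover the secret
assert reconstruct(shares[:2]) != 123456789   # 2 shares reveal nothing useful
```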

While noting the 1 in 1 trillion probability of a false positive, Apple said it also “manually reviews all reports made to NCMEC to ensure the accuracy of the reports.” Users can “file an appeal to have their account reinstated” if they believe their account was marked in error.

User devices to store the blinded CSAM database

User devices will store a “blinded database” that allows the device to determine when a photo matches an image in the CSAM database, Apple explained:

First, Apple receives the NeuralHashes corresponding to known CSAM from the aforementioned child safety organizations. These NeuralHashes then go through a series of transformations that includes a final blinding step, powered by elliptic curve cryptography. The blinding is done using a server-side blinding secret, known only to Apple. The blinded CSAM hashes are placed in a hash table, where the position in the hash table is purely a function of the NeuralHash of the CSAM image. This blinded database is securely stored on users’ devices. The properties of elliptic curve cryptography ensure that no device can infer anything about the underlying CSAM image hashes from the blinded database.
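
Apple performs the blinding step with elliptic curve cryptography, and its exact construction is not reproduced here. The sketch below substitutes exponentiation in a multiplicative group modulo a prime (a classical analogue of elliptic-curve scalar multiplication) so the key property is visible in a few lines: each blinded entry depends on a server-only secret, so a device holding the blinded table learns nothing about the underlying hashes. All parameters, function names, and placeholder hashes are illustrative assumptions.

```python
# Illustrative stand-in for the blinded-database construction. Apple uses
# elliptic-curve blinding; here, exponentiation in a multiplicative group
# plays the same role ("hash to a group element, then raise it to a
# server-only secret"). Parameters are demo-sized, not production ones.
import hashlib
import secrets

P = 2**255 - 19                                # demo prime modulus
SERVER_SECRET = secrets.randbelow(P - 2) + 1   # known only to the server

def hash_to_group(neural_hash: bytes) -> int:
    """Map a NeuralHash to a group element (illustrative mapping only)."""
    return int.from_bytes(hashlib.sha256(neural_hash).digest(), "big") % P

def blind(neural_hash: bytes) -> int:
    """Server-side blinding: raise the group element to the secret exponent."""
    return pow(hash_to_group(neural_hash), SERVER_SECRET, P)

def table_position(neural_hash: bytes, table_size: int) -> int:
    """Position in the table depends only on the NeuralHash itself."""
    return int.from_bytes(hashlib.sha256(b"pos" + neural_hash).digest(), "big") % table_size

# Build the blinded table that would be shipped to devices.
known_csam_hashes = [b"demo-hash-1", b"demo-hash-2", b"demo-hash-3"]  # placeholders
TABLE_SIZE = 1024
blinded_table = {table_position(h, TABLE_SIZE): blind(h) for h in known_csam_hashes}
```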

An iPhone or other device will analyze user photos, compute a NeuralHash, and look up “the entry in the blinded hash table.” The device “also uses the blinded hash that the system looked up to obtain a derived encryption key” and uses that encryption key “to encrypt the associated payload data.”

Combined with other steps, this ensures that only images matching the CSAM database can be decrypted, Apple wrote:

If the hash of the user’s image matches the entry in the known CSAM hash list, then the NeuralHash of the user’s image exactly transforms to the blinded hash if it went through the series of transformations done at database setup time. Based on this property, the server will be able to use the cryptographic header (derived from the NeuralHash) and, using the server-side secret, compute the derived encryption key and successfully decrypt the associated payload data.

If the user’s image does not match, the previous step will not lead to the correct derived encryption key and the server will not be able to decrypt the associated payload data. Therefore, the server does not learn anything about mismatched images.

The device does not learn about the result of the match, because that requires knowledge of the server-side blinding secret.

Finally, the client uploads the image to the server along with the voucher that contains the encrypted payload data and the cryptographic header.
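
Continuing the same illustrative stand-in (redefined here so the snippet runs on its own, again with a multiplicative group in place of Apple’s elliptic curves), the sketch below shows why only matching vouchers become readable: the device derives its encryption key from whatever blinded entry sits at its image’s table position, while the server re-derives a key from the cryptographic header and its blinding secret, and the two keys agree only when the image is actually in the database. This stripped-down version demonstrates only that key-agreement property; the real protocol adds further blinding so the header itself reveals nothing about non-matching images.

```python
# Stand-in for the match/decrypt flow: the derived keys line up only when the
# uploaded image's hash is actually in the blinded table. All names and
# parameters are illustrative, not Apple's.
import hashlib
import secrets

P = 2**255 - 19
SERVER_SECRET = secrets.randbelow(P - 2) + 1

def hash_to_group(h: bytes) -> int:
    return int.from_bytes(hashlib.sha256(h).digest(), "big") % P

def blind(h: bytes) -> int:
    return pow(hash_to_group(h), SERVER_SECRET, P)

def key_from(blinded: int) -> bytes:
    return hashlib.sha256(blinded.to_bytes(32, "big")).digest()

def table_position(h: bytes, size: int) -> int:
    return int.from_bytes(hashlib.sha256(b"pos" + h).digest(), "big") % size

TABLE_SIZE = 1024
blinded_table = {table_position(h, TABLE_SIZE): blind(h)
                 for h in [b"demo-hash-1", b"demo-hash-2"]}

def device_side(image_hash: bytes) -> tuple[int, bytes]:
    """Device: derive a key from the blinded entry at its image's position
    (falling back to a random value if the demo slot is empty) and send the
    group element of its own hash as the cryptographic header."""
    entry = blinded_table.get(table_position(image_hash, TABLE_SIZE),
                              secrets.randbelow(P))
    return hash_to_group(image_hash), key_from(entry)

def server_side(header: int) -> bytes:
    """Server: apply its blinding secret to the header and derive the key."""
    return key_from(pow(header, SERVER_SECRET, P))

header, device_key = device_side(b"demo-hash-1")       # matching image
assert server_side(header) == device_key               # keys agree -> payload decryptable

header, device_key = device_side(b"unrelated-photo")   # non-matching image
assert server_side(header) != device_key               # keys differ -> payload stays sealed
```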

As noted above, you can read the full technical summary here. Apple also published a longer and more detailed explanation of the “private set intersection” cryptography that determines whether a photo matches the CSAM database without revealing the result.




arstechnica.com
