Apple’s child-abuse scanner has serious flaw, researchers say

Researchers have found a flaw in iOS’s built-in hash function, raising new concerns about the integrity of Apple’s CSAM-scanning system. The flaw affects the hashing system, called NeuralHash, which allows Apple to check for exact matches of known child-abuse imagery without possessing any of the images or gleaning any information about non-matching pictures.
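
In rough terms, this kind of matching compares only hashes against a stored set of known hashes, so the checker never needs the underlying images and learns nothing about photos whose hashes are absent from the set. The Python sketch below is purely illustrative; the names and placeholder values are assumptions, not Apple's code.

```python
# Illustrative sketch of hash-set matching, not Apple's implementation.
# KNOWN_HASHES and its placeholder entries are hypothetical.
KNOWN_HASHES: set[str] = {
    "placeholder_hash_a",
    "placeholder_hash_b",
}

def is_known_image(photo_hash: str) -> bool:
    """Exact-match lookup: only the photo's hash is needed, never the photo."""
    return photo_hash in KNOWN_HASHES
```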

On Tuesday, a GitHub user called Asuhariet Ygvar posted code for a reconstructed Python version of NeuralHash, which he claimed to have reverse-engineered from previous versions of iOS. The GitHub post also includes instructions on how to extract the NeuralHash model files from a current macOS or iOS build.

“Early tests show that it can tolerate image resizing and compression, but not cropping or rotations,” Ygvar wrote on Reddit, sharing the new code. “Hope this will help us understand NeuralHash algorithm better and know its potential issues before it’s enabled on all iOS devices.”
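
The robustness claim in that post can be checked with a simple experiment: hash an image, apply common transformations, and see which ones preserve the hash. The sketch below is one way to run such a test; `compute_neuralhash` is a caller-supplied stand-in for the reconstructed model, not part of the published code.

```python
# Sketch of a robustness test: compare an image's hash before and after
# common transformations. compute_neuralhash is a stand-in callable that
# wraps the reconstructed NeuralHash model.
from io import BytesIO
from PIL import Image

def transformed_variants(img: Image.Image) -> dict[str, Image.Image]:
    """Produce resized, recompressed, cropped, and rotated copies."""
    w, h = img.size
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=30)  # heavy recompression
    return {
        "resized": img.resize((w // 2, h // 2)),
        "recompressed": Image.open(BytesIO(buf.getvalue())),
        "cropped": img.crop((w // 10, h // 10, w, h)),
        "rotated": img.rotate(15, expand=True),
    }

def robustness_report(path: str, compute_neuralhash) -> dict[str, bool]:
    """Report which transformations leave the original hash unchanged."""
    original = Image.open(path).convert("RGB")
    base_hash = compute_neuralhash(original)
    return {
        name: compute_neuralhash(variant) == base_hash
        for name, variant in transformed_variants(original).items()
    }
```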

Once the code was public, more significant flaws were quickly discovered. A user called Cory Cornelius produced a collision in the algorithm: two images that generate the same hash. If the findings hold up, it will be a significant failure in the cryptography underlying Apple’s new system.
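
Verifying such a report is straightforward once the model is in hand: confirm that the two files are genuinely different, then confirm that they hash to the same value. The sketch below assumes a `compute_neuralhash` callable supplied by the caller; it is not code from Cornelius's demonstration.

```python
# Sketch of collision verification: two different files, one identical hash.
# compute_neuralhash is a caller-supplied stand-in for the reconstructed model.
from pathlib import Path

def is_collision(path_a: str, path_b: str, compute_neuralhash) -> bool:
    """True if the files differ byte-for-byte but produce the same hash."""
    if Path(path_a).read_bytes() == Path(path_b).read_bytes():
        return False  # an input colliding with itself proves nothing
    return compute_neuralhash(path_a) == compute_neuralhash(path_b)
```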

On August 5th, Apple introduced a new system for stopping child-abuse imagery on iOS devices. Under the new system, iOS will check locally stored files against hashes of known child-abuse imagery, generated and maintained by the National Center for Missing and Exploited Children (NCMEC). The system contains numerous privacy safeguards, limiting scans to iCloud photos and requiring roughly 30 matches before an alert is generated. Still, privacy advocates remain concerned about the implications of scanning local storage for illegal material, and the new finding has heightened concerns about how the system could be exploited.
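
The threshold is simple to picture in code. The sketch below captures the reported behavior of alerting only after roughly 30 matches accumulate; the constant and function names are illustrative assumptions, not Apple's actual safeguard logic.

```python
# Hedged sketch of the reported safeguard: no alert until roughly 30
# database matches accumulate. The names here are illustrative only.
MATCH_THRESHOLD = 30  # approximate figure reported for the system

def should_generate_alert(photo_hashes: list[str], known_hashes: set[str]) -> bool:
    """Alert only once the number of exact matches reaches the threshold."""
    matches = sum(1 for h in photo_hashes if h in known_hashes)
    return matches >= MATCH_THRESHOLD
```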

While the collision is significant, it would take extraordinary effort to exploit in practice. Generally, collision attacks allow researchers to find two different inputs that produce the same hash. In Apple's system, this would mean generating an image that sets off the CSAM alerts even though it is not a CSAM image, because it produces the same hash as an image in the database. But actually triggering that alert would require access to the NCMEC hash database, the creation of more than 30 colliding images, and a way to smuggle all of them onto the target's phone. Even then, it would only generate an alert to Apple and NCMEC, which would easily identify the images as false positives.

Still, it’s an embarrassing flaw that will only elevate criticism of the new reporting system. A proof-of-concept collision is often disastrous for cryptographic hashes, as in the case of the SHA-1 collision in 2017, although perceptual hashes like NeuralHash are known to be more collision-prone. As a result, it’s unclear whether Apple will replace the algorithm in response to the finding or make more measured changes to mitigate potential attacks. Apple did not immediately respond to a request for comment.

More broadly, the finding will likely amplify calls for Apple to abandon its plans for on-device scanning, calls that have only escalated in the weeks since the announcement. On Tuesday, the Electronic Frontier Foundation launched a petition calling on Apple to drop the system, under the title “Tell Apple: Don’t Scan Our Phones.” As of press time, it had garnered more than 1,700 signatures.
