Apple Delays Plan To Scan iPhones For Child Abuse Images
Plans for software capable of detecting child abuse images, to be built into iPhones, have been temporarily suspended over privacy concerns. Apple has been developing a system that would automatically recognise illegal images when they are uploaded to iCloud and alert the authorities. The system was designed to look for images matching those in libraries assembled by law enforcement, in order to find and track the dissemination of child abuse material on the internet.
The delay follows widespread criticism from privacy groups and others, who worried that on-device scanning set a dangerous precedent, and concerns that the system could be abused by authoritarian states. Apple said it had listened to the negative feedback and was reconsidering.
The so-called NeuralHash technology would have scanned images just before they were uploaded to iCloud Photos, then matched their digital fingerprints against a database of known child sexual abuse material maintained by the National Center for Missing and Exploited Children.
If a match was found, it would have been reviewed by a human and, if required, steps taken to disable the user's account and report the matter to law enforcement.
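Conceptually, the matching step amounts to comparing a fingerprint of each outgoing image against a list of known fingerprints and escalating an account only after enough matches accumulate. The sketch below is a highly simplified illustration, not Apple's implementation: it substitutes an ordinary cryptographic hash for the proprietary NeuralHash perceptual hash, and the database contents, function names and threshold value are hypothetical placeholders.

```python
import hashlib
from typing import List, Set

# Hypothetical database of fingerprints of known abuse images, as might be
# supplied by a child-safety organisation. Apple's design used perceptual
# hashes (NeuralHash), so visually similar images map to the same value;
# an ordinary SHA-256 stands in for that here purely for illustration.
KNOWN_ABUSE_HASHES: Set[str] = set()

# Number of matches before an account is flagged for human review.
# Apple described a threshold of roughly 30 matches; the exact value
# used here is illustrative only.
REVIEW_THRESHOLD = 30


def image_fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint for an image about to be uploaded."""
    return hashlib.sha256(image_bytes).hexdigest()


def should_escalate(images: List[bytes]) -> bool:
    """Return True if the upload batch should be sent for human review."""
    matches = sum(
        1 for img in images if image_fingerprint(img) in KNOWN_ABUSE_HASHES
    )
    return matches >= REVIEW_THRESHOLD
```

The threshold is what the human-review step in the paragraph above gates on: no single match, in this simplified picture, would by itself disable an account or generate a report.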
In a statement, Apple said: "Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of child sexual abuse material. Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features." Apple has previously been a prominent advocate of privacy and end-to-end encryption.
As well as the CSAM scanning, Apple announced, and has now also paused, a second set of updates. These would have used an AI system to identify explicit images sent and received through the company's Messages app by users under 18 and, where those users were under 13 and had their phones managed by family members, would have warned a parent or guardian.
Matthew Green, a cryptography researcher at Johns Hopkins University who had criticised the plan, told the AP news agency that he supported the delay. "You need to build support before you launch something like this," Green said. "This was a big escalation from scanning almost nothing to scanning private files." Green had been among the experts who warned last month that the NeuralHash scanning system could be used for nefarious purposes: for example, innocent people could be framed by being sent seemingly innocuous images engineered to trigger matches for child abuse material. Green said such images would be enough to fool the system and trigger a report to law enforcement.
Privacy campaigners expressed concern that the technology could be expanded and used by authoritarian governments to spy on citizens.
The Electronic Frontier Foundation has been one of the most vocal critics of the system, gathering a petition signed by 25,000 customers opposing the move. Its executive director Cindy Cohn told the BBC: "The company must go further than just listening and drop its plans to put a backdoor into its encryption entirely... The enormous coalition that has spoken out will continue to demand that user phones - both their messages and their photos - be protected, and that the company maintains its promise to provide real privacy to its users."
Sources: NBC, Metro, CNet, DW, Guardian, BBC