Privacy concerns mount over Apple’s plan to scan iPhones for child sexual abuse images

Edward Snowden publicly criticised Apple’s decision, comparing the tech giant’s flagship devices to surveillance agents
Image: Child with smartphone (Shutterstock via Dennis)

6 August 2021

Apple’s decision to start scanning iPhone photo libraries for known images of child sexual abuse has raised concerns over user privacy.

The tech giant stated that the scanning would apply only to iPhones in the US, with no current plans to roll the feature out in the EU.

Apple stated that it would scan photos on iPhones before they are uploaded to its iCloud storage service, comparing users’ images against a database of known child sexual abuse material.
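
The mechanism Apple describes is a fingerprint lookup rather than content analysis: a hash of each photo is checked on the device against a database of hashes of already-identified abuse imagery. Below is a minimal, hypothetical sketch of that kind of hash-set check; the function names and database entry are invented for illustration, and SHA-256 merely stands in for Apple’s NeuralHash, a perceptual hash designed to survive resizing and re-encoding.

import hashlib

# Hypothetical fingerprint database of known abuse images, standing in
# for the on-device hash database Apple describes. The entry is invented.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in fingerprint: SHA-256 of the raw bytes. A real system uses
    # a perceptual hash, since a cryptographic hash changes completely
    # if the image is resized or re-encoded.
    return hashlib.sha256(image_bytes).hexdigest()

def flag_before_upload(image_bytes: bytes) -> bool:
    # Check the photo's fingerprint against the database before upload;
    # only a match is surfaced, and the photo content is never classified.
    return fingerprint(image_bytes) in KNOWN_CSAM_HASHES

Much of the controversy centres less on the lookup itself than on where it runs: on the user’s device, before anything reaches Apple’s servers.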

In a statement on its blog, Apple said that it wants “to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM)”.

“This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). NCMEC acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States,” it added.

Alongside the scanning of iCloud uploads, Apple will also notify the parents of underage users if their child is exposed to “sensitive content”, such as explicit imagery sent over iMessage.

“These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey,” said the company, adding that the programme is “ambitious, and protecting children is an important responsibility”.

Apple also stated that “these efforts will evolve and expand over time”, though it did not mention any plans to roll the features out beyond the US.

However, the announcement has already raised concerns about users’ privacy and fears that the technology could be exploited by governments to increase surveillance.

Whistleblower and former National Security Agency (NSA) employee Edward Snowden publicly criticised Apple’s decision, comparing the tech giant’s flagship devices to surveillance agents:

“No matter how well-intentioned, Apple is rolling out mass surveillance to the entire world with this. Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow. They turned a trillion dollars of devices into iNarcs – without asking,” he stated on Twitter.

Adam Leon Smith, chair of the Software Testing group at BCS, The Chartered Institute for IT, also had doubts about the use of the technology:

“On the surface this seems like a good idea, it maintains privacy whilst detecting exploitation. Unfortunately, it is impossible to build a system like this that only works for child abuse images. It is easy to envisage Apple being forced to use the same technology to detect political memes or text messages. Fundamentally this breaks the promise of end-to-end encryption, which is exactly what many governments want (except for their own messages of course),” he said, adding that this would only lead criminals to move to devices other than Apple’s.

Paul Bischoff, privacy advocate at Comparitech, told IT Pro that Apple’s announcement “doesn’t come as a surprise”, as the company had “hinted that it was scanning iCloud images for child abuse content some months ago”.

“Although there are privacy implications, I think this is an approach that balances individual privacy and child safety. The important thing is that this scanning technology is strictly limited in scope to protecting children and not used to scan users’ phones for other photos. If authorities are searching for someone who posted a specific photo on social media, for example, Apple could conceivably scan all iPhone users’ photos for that specific image,” he added.

However, Chris Hauk, consumer privacy champion at Pixel Privacy, said that, while he supports efforts to tackle CSAM, he also has “privacy concerns about the use of the technology”.

“A machine learning system such as this could crank out false positives, leading to unwarranted issues for innocent citizens. Such technology could be abused if placed in government hands, leading to its use to detect images containing other types of content, such as photos taken at demonstrations and other types of gatherings. This could lead to the government clamping down on users’ freedom of expression and used to suppress ‘unapproved’ opinions and activism,” he added.

In its announcement, Apple stated that the chances of its system “incorrectly flagging a given account” are one in a trillion.
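
That figure rests on the system requiring a threshold number of matches before an account is ever flagged, so a single stray false match on one photo does nothing. A back-of-the-envelope illustration of why a threshold collapses the account-level error rate, using invented numbers rather than Apple’s actual parameters:

from math import exp, lgamma, log, log1p

def binom_tail(n: int, p: float, t: int) -> float:
    # P(X >= t) for X ~ Binomial(n, p): the chance that at least t of
    # n independent photos each false-match with probability p.
    # Summed in log space, since the binomial coefficients overflow floats.
    total = 0.0
    for k in range(t, n + 1):
        log_term = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
                    + k * log(p) + (n - k) * log1p(-p))
        total += exp(log_term)
        if log_term < -745:  # remaining terms underflow to zero
            break
    return total

# Invented, illustrative parameters -- not Apple's published figures.
p = 1e-6    # per-photo false-match probability (assumed)
n = 10_000  # photos in one user's library (assumed)
t = 30      # matches required before the account is flagged (assumed)

print(binom_tail(n, p, 1))  # ~1e-2: one false match somewhere is plausible
print(binom_tail(n, p, t))  # astronomically small once 30 matches are required

Under these assumed numbers, an individual false match is unremarkable, but the probability of an account crossing the 30-match threshold by chance falls far below one in a trillion.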

Future Publishing
