Apple Announced Plans To Scan Its User Devices For Illegal Content


One of the biggest and most powerful companies on earth might also become one of the Internet’s biggest sheriffs. Apple recently announced that it is working on software for iPhones that would scan devices for illegal content, namely imagery of child abuse, but the implications are much bigger. Up until now, big tech has tried to collect data while avoiding reporting on its users. This could change fundamentally and might change the internet forever.

A report by the Financial Times made a lot of headlines over the last few days, raising concerns about the implications for how the internet works and what users can expect from their devices. Will they become active listening devices, constantly judging the lawfulness of their owners’ behavior? Orwell’s »Big Brother« doesn’t seem far away if Apple’s plan to scan its devices for illegal content becomes a reality.

Apple’s new plan to surveil its users runs under the project name »neuralMatch«. Apparently, the software will scan the owner’s Apple device and »proactively alert a team of human reviewers if it believes illegal imagery is detected, who would then contact law enforcement if the material can be verified.«

While the software was developed to combat child abuse, the underlying algorithm could be trained to scan for other content. As Apple is a global company and governments might have all kinds of reasons to declare content illegal, Apple’s new plan raises a lot of questions and might become a threat to users living under authoritarian regimes.

According to the Financial Times, several security researchers noted that »although the system is currently trained to spot child sex abuse, it could be adapted to scan for any other targeted imagery and text, for instance, terror beheadings or anti-government signs at protests, say researchers. Apple’s precedent could also increase pressure on other tech companies to use similar techniques.«

Matthew Green, a security researcher at Johns Hopkins University, went even further and called it a catastrophic idea. »This will break the dam – governments will demand it from everyone.«

The Financial Times article sees many implications and problems with the software. It said: »Apple’s neuralMatch algorithm will continuously scan photos that are stored on a U.S. user’s iPhone and have also been uploaded to its iCloud back-up system. Users’ photos, converted into a string of numbers through a process known as ‘hashing,’ will be compared with those on a database of known images of child sexual abuse.«
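The article describes photos being converted into a string of numbers through »hashing« and then compared against a database of known images. The following is a minimal sketch of that general idea, not Apple’s actual system: it uses an ordinary cryptographic hash instead of the perceptual hashing a real matching system would need, and the sample database and function names are assumptions made purely for illustration.

```python
import hashlib

# Hypothetical database of hashes of known illegal images,
# analogous to the NCMEC-derived database mentioned in the article.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def hash_image(image_bytes: bytes) -> str:
    """Convert an image into a fixed-length string of numbers (a hash).

    A real system would use a *perceptual* hash so that resized or
    re-encoded copies of the same picture still match; SHA-256 only
    matches byte-identical files and is used here just to show the idea.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def is_suspect(image_bytes: bytes) -> bool:
    """Flag an image if its hash appears in the known-image database."""
    return hash_image(image_bytes) in KNOWN_BAD_HASHES

# Example: scan a few on-device photos (here, just raw byte strings).
photos = [b"holiday_photo_bytes", b"screenshot_bytes"]
print([is_suspect(p) for p in photos])  # [False, False] unless a hash matches
```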

It will all come down to how the algorithm is trained and programmed, and especially who decides how to program it. Right now, the Financial Times reports that the software »has been trained on 200,000 sex abuse images collected by the U.S. non-profit National Center for Missing and Exploited Children.«

It added: »According to people briefed on the plans, every photo uploaded to iCloud in the U.S. will be given a ‘safety voucher’ saying whether it is suspect or not. Once a certain number of photos are marked as suspect, Apple will enable all the suspect photos to be decrypted and, if apparently illegal, passed on to the relevant authorities.«
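The quoted description amounts to a per-account counter: each upload carries a »safety voucher«, and only once the number of suspect photos crosses some threshold would the suspect material be decrypted and reviewed. Below is a rough sketch of that threshold logic; the threshold value, class names, and data structures are assumptions, since Apple has not published the actual details.

```python
from dataclasses import dataclass, field

# Hypothetical threshold; Apple has not stated the real number.
SUSPECT_THRESHOLD = 30

@dataclass
class SafetyVoucher:
    photo_id: str
    suspect: bool  # result of the on-device hash match

@dataclass
class Account:
    user_id: str
    vouchers: list = field(default_factory=list)

    def upload(self, voucher: SafetyVoucher) -> None:
        """Attach a safety voucher to each photo uploaded to iCloud."""
        self.vouchers.append(voucher)

    def review_needed(self) -> bool:
        """Only once enough photos are marked suspect would the suspect
        photos be decrypted and passed on to human reviewers."""
        suspect_count = sum(v.suspect for v in self.vouchers)
        return suspect_count >= SUSPECT_THRESHOLD

account = Account(user_id="example")
account.upload(SafetyVoucher(photo_id="img_001", suspect=True))
print(account.review_needed())  # False until the threshold is reached
```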

Beyond the question of what other content could be flagged as illegal and how the software might be used in the future, there is also the problem of false flaggings by the algorithm, as the software is of course not perfect. What would it mean if the device manufacturer Apple essentially put people under police investigation, and it all turned out to be a false positive? Thousands of lives could be ruined, and many more endangered, by the basic principle of the software and Apple’s willingness to spy on its clients.
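To see why false flags matter at Apple’s scale, a back-of-the-envelope calculation helps. All of the figures below are hypothetical numbers chosen only for illustration; neither Apple nor the Financial Times has published error rates.

```python
# All figures are assumptions for illustration only.
false_positive_rate = 1e-6      # hypothetical: one wrong match per million photos
users = 100_000_000             # hypothetical number of iCloud photo users
photos_per_user = 2_000         # hypothetical average library size

wrongly_flagged = false_positive_rate * users * photos_per_user
print(f"Expected wrongly flagged photos: {wrongly_flagged:,.0f}")
# Even a one-in-a-million error rate yields roughly 200,000 false matches
# across this hypothetical population, which is why a per-account threshold
# and human review would sit between a match and any report to police.
```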
