
Privacy campaigners are in uproar over a new technology, developed by an Australian former model, that can match a facial image with publicly available social media posts. Some are describing the development as “the end of privacy as we know it”.
According to a report in the New York Times, the technology was developed by Hoan Ton-That, an Australian citizen and former male model, through his company Clearview AI.
What Clearview AI does and why it is problematic
His app lets a person take a photo of someone, upload it, and get back a list of publicly available images that match that person, along with links to the sites where each image appeared.
Clearview AI reportedly sources these matches from a database of more than three billion images it has scraped from social media sites such as Facebook, Twitter and YouTube.
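Neither Clearview AI nor the Times has published the app's internals, but face-search systems of this kind generally follow a standard pipeline: compute a numerical “embedding” for each scraped face, store the embeddings alongside their source links, and answer a query by finding the nearest stored embeddings. The Python sketch below is a hypothetical illustration of that general approach; the toy embedding function and all class and method names are assumptions, not Clearview AI's actual code.

```python
import numpy as np

EMBEDDING_DIM = 512
_PIXELS = 64 * 64

# Fixed random projection as a toy stand-in for a trained embedding network.
_PROJ = np.random.default_rng(0).normal(size=(EMBEDDING_DIM, _PIXELS))


def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Toy embedding: a real system would run a trained neural network here."""
    flat = np.resize(image_pixels.astype(float).ravel(), _PIXELS)
    return _PROJ @ flat


class FaceIndex:
    """Stores (embedding, source URL) pairs scraped from public pages."""

    def __init__(self):
        self.embeddings = []  # unit-normalised face vectors
        self.urls = []        # where each image was found

    def add(self, image_pixels: np.ndarray, url: str) -> None:
        vec = embed_face(image_pixels)
        self.embeddings.append(vec / np.linalg.norm(vec))
        self.urls.append(url)

    def search(self, query_pixels: np.ndarray, top_k: int = 10):
        """Return the top_k most similar stored images with their source links."""
        q = embed_face(query_pixels)
        q = q / np.linalg.norm(q)
        scores = np.stack(self.embeddings) @ q  # cosine similarity
        best = np.argsort(scores)[::-1][:top_k]
        return [(self.urls[i], float(scores[i])) for i in best]
```

At billions of images, a real system would replace this brute-force comparison with an approximate nearest-neighbour index, but the principle is the same: once a face is scraped and embedded, it can be matched against any future photo.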
That database is far larger than any known to have been collected before, including by the US government and law enforcement agencies. It is unprecedented in scale and, as such, raises huge privacy questions.
As Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, told the New York Times, “The larger the database, the larger the risk of misidentification because of the doppelgänger effect.”
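Garvie's point can be made concrete with a little probability. If each comparison carries even a tiny chance p of a false match, the chance of at least one false match across a database of N faces is 1 − (1 − p)^N, which climbs quickly with N. The per-comparison rate below is an assumed figure chosen purely for illustration, not a measured error rate for Clearview AI or any other system.

```python
# Illustrative only: p is an assumed per-comparison false-match rate,
# not a measured figure for any real system.
p = 1e-9  # one-in-a-billion false-match rate per comparison

for n in (1_000_000, 100_000_000, 3_000_000_000):
    at_least_one = 1 - (1 - p) ** n
    print(f"database of {n:>13,} faces: P(false match) ≈ {at_least_one:.1%}")
```

Even a one-in-a-billion error rate per comparison makes a false match more likely than not once the database reaches billions of faces.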
If ever you needed a warning to be cautious about the images you upload onto social media of yourself, your children, and any loved ones, this story is it.
The New York Times article notes that this is a technology that even the biggest tech companies have held back on for moral reasons. It quotes Google’s then chairman, who said back in 2011 that the company had held back from this type of development because it could be used “in a very bad way.”
Clearview AI already being deployed in the USA and Australia
But Clearview AI’s technology is already being used in the USA. This is despite the fact that law enforcement bodies there openly admit they have little idea how it works or who is behind it.
The app has nevertheless been used to help solve a range of offences, including shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.
In one example, a bystander in Indiana recorded a fight between two men that ended with one shooting the other in the stomach. Police ran a still from the video through Clearview AI’s app and got a match to a Facebook profile that contained the man’s name. As a result, they claim, the case was solved in 20 minutes.
It is not only in the USA that law enforcement appears to have leapt on the potential of this app. Authorities in the Australian states of New South Wales and Victoria have confirmed that they are using facial recognition technology as part of their work, but refused to confirm to the Sydney Morning Herald precisely which apps they are deploying.
A spokesperson for the New South Wales Police Minister, David Elliott, told the paper, “Face Matching Services are being implemented to provide law enforcement with a powerful investigative tool to identify people associated with criminal activities.”
Victoria Police also confirmed that they are using facial recognition cameras in some of the state’s busiest places as well as in their own police stations.
Clearview AI claims to have more than 600 law enforcement clients around the world but has refused to name them. It has also licensed the app to some private companies for security purposes.
The long-term risks of the app
In its article, the New York Times reveals the results of its analysis of the code underpinning the app.
The code includes provisions for pairing the app with augmented-reality glasses (think Google Glass, but more capable). This opens up the prospect of someone wearing such glasses being able to identify everyone they see.
It could be used to stalk an attractive person you see on the bus or to identify and persecute people attending a political protest. The impact of such technology on things like the recent democracy protests in Hong Kong doesn’t bear thinking about.
As Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University has said, “The weaponization possibilities of this are endless. Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail.”
The app also hands an awful lot of power to Clearview AI itself. The New York Times reporter found that when he asked police officers to run his own image through the app, the company contacted them to ask if they were speaking to the media.
This strongly suggests that Clearview AI can monitor every search run by a law enforcement agent anywhere in the world, a huge additional privacy risk on top of the obvious ones.
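The Times’s anecdote is consistent with a standard hosted-service architecture: the officer’s device sends the probe image to the vendor’s servers, which perform the match and can record every request before returning results. The sketch below illustrates, in deliberately simplified and hypothetical form, why any vendor running such a service can log every search; the endpoint and field names are invented for illustration and are not Clearview AI’s actual API.

```python
# Hypothetical sketch of a hosted face-search endpoint; not Clearview AI's code.
from flask import Flask, request, jsonify

app = Flask(__name__)


def run_face_match(probe_bytes: bytes) -> dict:
    """Stand-in for the matching backend; would return matched source URLs."""
    return {"matches": []}


@app.route("/search", methods=["POST"])
def search():
    probe = request.files["image"].read()
    client = request.headers.get("X-Api-Key", "unknown")

    # The vendor controls this code path, so nothing stops it from
    # recording who searched for whom, and when.
    app.logger.info("search by client %s, %d-byte probe image", client, len(probe))

    return jsonify(run_face_match(probe))
```

Because the matching happens on servers the vendor controls, only policy or goodwill stands between a search and a permanent record of it.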
Various countries and regions have banned the use of facial recognition, and even the European Union recently began considering a ban in all public spaces, creating a sense that the tide was turning against its widespread use.
But Clearview AI shows the risk the technology poses when taken to its logical conclusion. And what is most terrifying is that this is not a technology of the future. It is already being used by law enforcement around the world.