On Saturday, 12 March, 17 days after Russia launched its invasion, Ukraine's defence ministry began using US-based Clearview AI’s facial recognition technology, according to a Reuters report.
This will potentially allow Ukrainian authorities to vet people at checkpoints, unmask Russian assailants, combat misinformation and identify the dead.
Clearview AI's chief executive, Hoan Ton-That, told the agency that his company has a database of more than 2 billion images from the Russian social media service VKontakte.
While powerful facial recognition software could be of immense help in Ukraine's efforts against the Russian invasion, Clearview AI's technology has a controversial history.
The company managed to stay under the radar until early 2020, when its use by US law enforcement began to be widely covered in the media.
Here's all you need to know about Clearview AI:
Clearview AI, initially called Smartcheckr, was founded in 2017 by Hoan Ton-That, an Australian app developer, and Richard Schwartz, a former aide to New York mayor Rudolph Giuliani and an editor at the New York Daily News tabloid.
The two met at a conservative think tank in 2016 and started out with a small team of engineers who helped design a web-crawling program and facial recognition software, according to an exposé by The New York Times.
Clearview AI soon began pitching its service to law enforcement agencies and saw widespread success. The company offered 30-day free trials to officers, who then recommended it to their departments because it could identify faces in seconds, according to the report.
The idea is simple: use facial recognition software to match faces against a huge database of photos collected from the internet.
Clearview AI reportedly runs a program that can automatically collect images of people’s faces from across the internet including news sites, employment portals, and, of course, social media platforms like Facebook, Instagram, and Twitter.
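To make that concrete, here is a minimal sketch, in Python, of how such an image crawler could work. This is an illustration only, built on assumed tooling (the widely used requests and BeautifulSoup libraries) and a hypothetical crawl_images helper; it is not Clearview AI's actual code.

```python
# Illustrative sketch of a web crawler that collects image URLs --
# not Clearview AI's actual code.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def crawl_images(seed_url: str, max_pages: int = 50) -> set[str]:
    """Breadth-first crawl from seed_url, collecting <img> source URLs.

    The crawl stays on the seed's domain and is capped at max_pages
    discovered pages.
    """
    seen_pages, image_urls = {seed_url}, set()
    queue = deque([seed_url])
    domain = urlparse(seed_url).netloc

    while queue and len(seen_pages) <= max_pages:
        page = queue.popleft()
        try:
            html = requests.get(page, timeout=10).text
        except requests.RequestException:
            continue  # skip unreachable pages

        soup = BeautifulSoup(html, "html.parser")

        # Collect the absolute URL of every image on the page.
        for img in soup.find_all("img", src=True):
            image_urls.add(urljoin(page, img["src"]))

        # Queue unvisited links on the same domain.
        for link in soup.find_all("a", href=True):
            url = urljoin(page, link["href"])
            if urlparse(url).netloc == domain and url not in seen_pages:
                seen_pages.add(url)
                queue.append(url)

    return image_urls
```

A production-scale crawler would also handle rate limiting and robots.txt, and would store the downloaded images themselves rather than just their URLs.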
The facial recognition software that Clearview AI uses was originally derived from academic papers; it converts each of these images into a vector, a numerical representation of the face, and sorts similar vectors into 'neighborhoods', according to The New York Times.
When a user uploads a fresh photo into Clearview's system, it too is converted into a vector and matched to a 'neighborhood'. All the photos within this cluster are reportedly displayed to the user, along with links to the original websites, making it easy to put a name to a face.
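The matching step can likewise be sketched in a few lines of Python. The embed() function, the FaceIndex class, and the 512-dimensional vectors below are assumptions for illustration; this shows generic cosine nearest-neighbor search, not Clearview AI's actual pipeline.

```python
# Illustrative sketch of vector-based face matching -- not Clearview AI's
# actual system. embed() stands in for a real face-embedding model.
import numpy as np


def embed(image: np.ndarray) -> np.ndarray:
    """Placeholder for a face-embedding model mapping an image to a vector."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    vec = rng.standard_normal(512)      # 512 dims: a common embedding size
    return vec / np.linalg.norm(vec)    # unit-normalise for cosine similarity


class FaceIndex:
    """Stores face vectors alongside the URLs the photos came from."""

    def __init__(self) -> None:
        self.vectors: list[np.ndarray] = []
        self.sources: list[str] = []    # link back to the original website

    def add(self, image: np.ndarray, source_url: str) -> None:
        self.vectors.append(embed(image))
        self.sources.append(source_url)

    def query(self, image: np.ndarray, top_k: int = 5) -> list[tuple[str, float]]:
        """Return the top_k closest stored faces with their similarity scores."""
        q = embed(image)
        sims = np.stack(self.vectors) @ q       # dot product = cosine (unit vectors)
        best = np.argsort(sims)[::-1][:top_k]   # highest similarity first
        return [(self.sources[i], float(sims[i])) for i in best]
```

At the scale of billions of images, a brute-force scan like the one above would be far too slow; the 'neighborhoods' described by the Times correspond to the clusters used by approximate nearest-neighbor indexes, which let the system search only the cluster closest to the query vector.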
Apart from law enforcement agencies, there have been multiple reports of individuals and private companies having access to the service.
When Clearview AI was starting out, it approached a range of private companies. Before the company came under public scrutiny, the app was reportedly being used freely by its investors and clients, often to spy on others.
There were also security concerns. In 2020, Gizmodo reporters were able to access a version of the app that they found on a publicly accessible Amazon server. The same year, a data breach exposed the company's client list, which ran into the thousands.
Buzzfeed reported that access to the app had also been granted to far-right activists, trolls, and conservative think tanks while Huffington Post linked the founder to far-right extremists and conspiracy theorists, some of whom may have been involved in the application's development.
Clearview AI pegs the accuracy of its service at close to 100 percent, but the American Civil Liberties Union (ACLU) has called the claim misleading and the methodology absurd.
Clearview AI says that it is a "post-event investigative tool" and not a surveillance system. However, there are genuine concerns that such a tool will erode the privacy of anyone whose images appear online.
After it came to light that Clearview AI was scraping images from their platforms, tech giants including Google, Facebook (now Meta), and Twitter sent cease-and-desist notices to the company.
Meanwhile, in Europe, a coalition of digital rights groups filed complaints with data-protection authorities in France, Austria, Italy, Greece, and the United Kingdom, alleging violations of the European Union's General Data Protection Regulation (GDPR).
In November 2021, the UK's Information Commissioner's Office threatened to fine Clearview AI and ordered it to stop processing the data of UK citizens. In December, France's data protection regulator, CNIL, also ordered the company to stop processing citizens' data and gave it two months to delete any data it held. Australia's privacy watchdog has deemed its practices illegal.
Most recently, in March 2022, Italy's data protection agency announced a 20 million euro penalty against Clearview AI for breaching the GDPR, ordered it to delete all data it held on Italians, and banned it from any further processing.
(With inputs from The New York Times, Reuters, Huffington Post, Buzzfeed and Gizmodo)