Union Home Minister Amit Shah revealed in the Lok Sabha on the evening of Wednesday, 11 March, that law enforcement agencies deployed facial recognition software to identify over 1,100 individuals who allegedly took part in the communal violence in northeast Delhi on 24-25 February.
In a first-of-its-kind admission, the home minister, speaking about the communally-charged violence that left at least 52 dead and over 500 injured, said, “We have used facial recognition software to initiate the process of identifying faces.”
While Shah did not specify the kind of facial recognition software used – whether biometrics were involved or which law enforcement agency deployed it – he stated that the software was fed with images from voter ID cards as well as driving licence databases, among others.
1. LEGALITY
A key concern in the use of an evolving technology like facial recognition is the absence of an underlying legal framework to guide its usage while ensuring adequate protection of the fundamental right to privacy.
Apar Gupta, executive director, Internet Freedom Foundation, which has raised concerns over the dangers of facial recognition systems, said, “All of this is being done without any clear underlying legal authority and is in clear violation of the Right to Privacy judgment.”
2. TRANSPARENCY
The question to ask is: what facial recognition software are these databases being fed into? As of now, the software, its developer, how its algorithm was trained and the datasets used are not known.
Shah did not specify which agency deployed the technology.
Moreover, an uptick in the use of facial recognition among law enforcement agencies in India comes at a time when Western jurisdictions, including the US and the EU, have taken drastic steps to ban or limit the use of the technology.
In May 2019, San Francisco, the heart of the US tech industry and Silicon Valley, banned the use of facial recognition by law enforcement agencies, owing to the potential for abuse and amid fears of the technology pushing the country towards an overt surveillance state.
3. BIAS
What Shah is implying through his response to Owaisi’s concerns is that algorithms are agnostic or blind to the biases inherent in humans. However, extensive research has thoroughly debunked this assumption.
On the contrary, FRT algorithms in a number of commercial software products have shown racial, ethnic and gender biases.
At least one study carried out at the Massachusetts Institute of Technology has revealed that FRT from giants like IBM and Microsoft is less accurate at identifying women. In the US, many reports have discussed how such software is particularly poor at accurately recognising African-American women.
4. ACCURACY
There is no information on the specific software deployed, and hence no knowledge of the accuracy of the algorithm being used. The accuracy of FRTs has been one of the primary concerns in deploying the technology for security and law enforcement purposes.
Among the risks posed by low-accuracy software are false negatives and false positives in identifying individuals. The former means failing to identify people who were involved in the violence; the latter means incorrectly flagging innocent people.
This risk is exacerbated in situations that demand ‘1:many identification’ – matching a single unknown face against a large database of identities.
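Since neither the software used by the agencies nor its matching threshold has been disclosed, the following is a minimal, purely hypothetical Python sketch of how a 1:many search over face embeddings typically works, and how the choice of threshold trades false positives against false negatives. Every name, vector and number below is invented for illustration and does not describe the actual system.

```python
# Illustrative sketch only: a toy 1:N face-matching loop over made-up
# embedding vectors. The real software, database and thresholds are not
# publicly known; all names here are hypothetical.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors, roughly in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, database, threshold):
    """1:N identification: compare one unknown face against every enrolled
    identity and return the best match above the threshold, else None."""
    best_id, best_score = None, -1.0
    for person_id, enrolled in database.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= threshold else None

# Toy database of enrolled identities (random vectors stand in for real
# face embeddings derived from, say, voter-ID or driving-licence photos).
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

# A probe image of someone who is NOT in the database.
probe = rng.normal(size=128)

# A low threshold lets a spurious "best match" pass: a false positive.
print(identify(probe, database, threshold=0.10))  # may wrongly name someone
# A very high threshold rejects even genuine matches degraded by CCTV
# conditions (blur, angle, lighting): a false negative.
print(identify(probe, database, threshold=0.95))  # returns None
```

The larger the database being searched and the poorer the probe image, the harder it becomes to pick a threshold that avoids both kinds of error at once, which is exactly the 1:many risk described below.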
Smriti Parsheera, a fellow at the National Institute of Public Finance and Policy (NIPFP), has authored a report on the adoption and regulation of facial recognition technologies in India, in which she describes the risks of 1:many identification.
“For instance, trying to identify a person based on CCTV footage is complicated because of the conditions in which such images are captured as well as the uncertainty regarding whether that person is actually present in the database that is being used for matching,” the report notes.