The Los Angeles Police Department (LAPD) has banned the use of commercial facial recognition platforms after officers allegedly used one without authorisation.
Facial recognition systems have faced increased scrutiny in recent years due to repeated studies showing they have serious biases.
Wherever facial recognition systems are deployed, the public needs to have faith they're being used fairly—something no deployment so far has achieved.
Public trust in facial recognition has been damaged further this week after 25 LAPD officers were accused of using it unofficially to try to identify people nearly 475 times over a three-month period.
A directive reportedly sent to the entire LAPD by deputy chief John McMahon and the head of the department's IT arm said that officers may now only use LA County's own ID system, which relies on booking photos taken by officers after arrests, and not the controversial Clearview AI system, which draws on millions of images harvested from across the web.
In June, Detroit Police chief James Craig said facial recognition would misidentify someone around 96 percent of the time.
Craig's comments were made just days after the American Civil Liberties Union (ACLU) lodged a complaint against Detroit Police following the harrowing wrongful arrest of Robert Williams, a Black man, due to a facial recognition error.
Detroit Police arrested Williams for allegedly stealing five watches valued at $3,800 from a store in October 2018. A blurry CCTV image was matched by a facial recognition algorithm to Williams' driver's license photo.
Current AI algorithms are known to exhibit racial bias. Extensive studies have repeatedly shown that facial recognition algorithms achieve near-perfect accuracy on white males but perform significantly worse on people with darker skin tones and on women.
Facial recognition is increasingly being used to perform mass surveillance on protests such as Black Lives Matter. Given the evidence showing the bias of such technologies against minorities, it’s a recipe for disaster.
Across the pond in the UK, facial recognition tests have also been nothing short of a complete failure. An initial trial at the 2016 Notting Hill Carnival led to not a single person being identified, while a follow-up trial the following year led to no legitimate matches but 35 false positives.
An independent report into the Met Police's facial recognition trials, conducted last year by Professor Peter Fussey and Dr Daragh Murray, concluded that the technology was verifiably accurate in just 19 percent of cases.
In June, an open letter was penned by 1,000 experts in response to a chilling paper called ‘A Deep Neural Network Model to Predict Criminality Using Image Processing.’
“Machine learning programs are not neutral; research agendas and the data sets they work with often inherit dominant cultural beliefs about the world,” the letter’s authors warned.
Until facial recognition and machine learning systems can be proven unbiased, deployments will continue to harm both individual victims and the public's trust in such technologies.