By Sally Ward-Foxton, 06.20.19
The UK is one of the most surveilled countries in the world, with closed-circuit television (CCTV) cameras everywhere, from shops to buses to private homes. In the last couple of years, AI techniques, such as neural networks, have propelled large-scale automated facial recognition. Combined with the UK’s huge network of existing CCTV cameras, this could hold vast potential for security applications.
A promotional picture for NEC’s NeoFace facial recognition technology. (Source: NEC)
However, there are worrying implications for privacy. Biometric facial data taken from photos or CCTV footage is particularly troubling because it can be captured without the person’s consent and without their knowledge. This data may be collected indiscriminately — whether the subject is matched with a watch list or not — and it allows people to be located and their movements tracked. It’s easy to imagine photos of crowds with every face linked to the person’s identity, then potentially connected to all kinds of other data about them.
If we are going to trade our privacy for increased security, the benefits must outweigh the costs. Is facial recognition technology actually effective in catching criminals?
Police forces around the UK have performed trials of live facial recognition technology in public places such as shopping malls, sports stadiums, and busy streets. The Metropolitan Police have undertaken 10 trials of the technology so far, using dedicated camera equipment along with NEC’s NeoFace algorithm, which measures the structure of each face, including the distances between the eyes, nose, mouth, and jaw. According to NEC, this algorithm can tolerate poor-quality images such as compressed surveillance video, and can match images with resolutions as low as 24 pixels between the eyes.
Despite winning four consecutive awards in the U.S. National Institute of Standards and Technology’s Face in Video Evaluation (NIST FIVE), NeoFace performed poorly in the field. Across the Met’s eight deployments between 2016 and 2018, 96% of its facial recognition matches misidentified innocent members of the public as being on the watch list.
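A high false-match share like this can coexist with an accurate algorithm because almost nobody scanned is actually on the watch list. The sketch below illustrates that base-rate effect with Bayes’ rule; the accuracy and prevalence figures are illustrative assumptions, not the Met’s or NEC’s real numbers.

```python
# Illustrative only: how an accurate matcher still yields mostly false
# alerts when watch-list faces are rare among those scanned.

def match_precision(true_match_rate, false_match_rate, prevalence):
    """Fraction of alerts that are genuine matches (Bayes' rule)."""
    true_alerts = true_match_rate * prevalence
    false_alerts = false_match_rate * (1.0 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# Assume: 99% of watch-list faces trigger an alert, 0.1% of other faces
# trigger a false alert, and 1 in 10,000 passersby is on the list.
p = match_precision(0.99, 0.001, 0.0001)
print(f"{p:.1%} of alerts are genuine")  # roughly 9% — most alerts are wrong
```

Under these assumed figures, about 91% of alerts would point at innocent passersby, in the same ballpark as the trial results, even though the matcher itself is highly accurate.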
A major criticism of the trials has been that the public were not sufficiently alerted to them and were not given the opportunity to opt out. The BBC filmed one passerby in North London covering his face — the police photographed him anyway and gave him a £90 fine for disorderly conduct.
A South Wales Police van with facial recognition cameras. Text on the side of the van reads “Facial recognition fitted” in English and Welsh, but vans used by the Metropolitan Police were less clearly marked. (Source: Liberty)
South Wales Police’s more extensive trials of the technology were marginally more successful: only 91% of matches were misidentifications of innocent members of the public. Images of all passersby, matched or not, were stored by the police for 31 days.
This trial prompted one member of the public to take legal action against South Wales Police after he was photographed while out shopping and while at a peaceful anti-arms protest. He claims that automated facial recognition technology violated his right to privacy, interfered with his right to protest, and breached data protection laws (judgment is due this fall).
Sally Ward-Foxton covers AI technology and related issues for EETimes.com and all aspects of the European industry for EETimes Europe magazine. Sally has spent more than 15 years writing about the electronics industry from London, UK. She has written for Electronic Design, ECN, Electronic Specifier: Design, Components in Electronics, and many more. She holds a master’s degree in Electrical and Electronic Engineering from the University of Cambridge.