How are your cheekbones? Good? If so, you could be a shoo-in for automatic facial recognition technology.

“One of the test data sets used very early in facial recognition was a test data set from a county in Florida where they got all the photographs of convicted felons and, because they were on meth, the cheekbones were fantastic – very easy to recognise,” according to Rachel Dixon, Privacy and Data Protection Deputy Commissioner at the Office of the Victorian Information Commissioner in Australia.

“Not actually terribly useful when training facial recognition on regular people, because some of us don’t quite have that kind of cheekbone feature.”

Dixon joined other international facial recognition technology experts for a panel discussion at Te Herenga Waka—Victoria University of Wellington.

The experts were at the university as part of a Law Foundation-supported research project that Associate Professor Nessa Lynch from the Faculty of Law is leading into how facial recognition technology is currently used or may be used in New Zealand, and the rights and interests that might be affected.

“Obviously, the technology has immense value in promoting societal interests such as efficiency and security but it also represents a threat to some of our individual interests, particularly privacy,” said Lynch, introducing the panel.

She and her fellow researchers, due to report in mid-2020, are looking at regulation – “assessing what there is at the moment, how it might affect the technology, and what potential forms of regulation might look like”.

Dixon warned that facial recognition “is not the Swiss army knife of understanding who people are”.

She recommended limited use specific to the task at hand, with governance clearly delineated and contestability part of the programme.

“I think the really big problem we face here at the moment is that once the system has identified you as X it’s very hard, particularly for poor people, to get any natural justice out of that.”

Implementation of the systems is hard, said Dixon.

“The vendors will tell you this is automagical and that if you buy their system you will get these fabulous confidence rates. And for certain use cases that’s true. Most of you have been through SmartGates, for example. SmartGates have a relatively low rate of false positives and false negatives compared with some other systems. But the reason they do is they are static. They are installed in particular kinds of environments and you can tune them for those environments. They’re used for a particular use case. There are a lot of knowns.”
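The trade-off Dixon alludes to between false positives and false negatives comes down to where a match threshold is set. A minimal sketch of that idea follows, with invented score distributions and thresholds that are assumptions for illustration, not anything a real border gate uses:

```python
# Illustrative only: invented similarity scores, not a real face-matching system.
import numpy as np

rng = np.random.default_rng(0)

# Simulated match scores: genuine pairs (same person) tend to score higher than
# impostor pairs (different people), but the two distributions overlap.
genuine = rng.normal(loc=0.75, scale=0.10, size=10_000)
impostor = rng.normal(loc=0.45, scale=0.10, size=10_000)

def error_rates(threshold):
    """Declare a match when the score is at or above the threshold."""
    false_negatives = np.mean(genuine < threshold)    # real matches rejected
    false_positives = np.mean(impostor >= threshold)  # wrong people accepted
    return false_positives, false_negatives

# "Tuning" for a controlled environment amounts to choosing a threshold that
# balances these two error rates for that particular use case.
for t in (0.50, 0.60, 0.70):
    fp, fn = error_rates(t)
    print(f"threshold {t:.2f}: false positive rate {fp:.1%}, false negative rate {fn:.1%}")
```

Raising the threshold cuts false positives but rejects more genuine matches, which is part of why a system tuned for one environment can perform poorly in another.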

By contrast, said Dixon, “picking you out walking randomly down the street can be quite challenging. There’s a whole bunch of environmental factors there that go to essentially reducing the confidence level. And this is all probabilistic. None of this is absolute. There is no one-to-one match. And by perturbing an image even a small amount you can make the machine-learning system think the person is a toaster. I’m not joking.”
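The “toaster” point refers to what machine-learning researchers call adversarial perturbations. A minimal sketch of the effect, using a toy two-class model and the standard fast-gradient-sign method rather than any real face-recognition system (the model, labels and step size below are all assumptions for illustration):

```python
# Illustrative only: a toy classifier, not a real face-recognition model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny two-class "image" classifier over a 32x32 grayscale input.
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))
model.eval()

image = torch.rand(1, 1, 32, 32, requires_grad=True)
original_label = model(image).argmax(dim=1)

# Fast gradient sign method: nudge every pixel a small step (epsilon) in the
# direction that increases the loss for the model's current prediction.
loss = nn.functional.cross_entropy(model(image), original_label)
loss.backward()
epsilon = 0.1  # small per-pixel change, barely visible to a human
perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    new_label = model(perturbed).argmax(dim=1)

# For this toy model the prediction typically flips even though the image
# has barely changed.
print("before:", original_label.item(), "after:", new_label.item())
```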

At the same time, she said, “one of the challenges we have is overcoming the instances when a false positive is assigned. So I get a match for you as a person of interest. It is very hard to shake somebody’s perception [to make them believe] that is wrong after a machine has said it is right. This is true across a whole range of fields. There’s a lot of research to show that the first thing people are told is the thing they believe.”

Clare Garvie, a Senior Associate at the Center on Privacy and Technology at Georgetown Law in the United States, estimated that, conservatively, a quarter of her country’s 18,000 law enforcement agencies have access to a face recognition system, mostly for investigative purposes.

“With very few exceptions, there are no laws that govern the use of this technology either at a federal level or state and local levels. As a result, this technology has been implemented largely without transparency to the public, without rules around auditing or public reporting, without rules around who can be subject to a search. As a result, it is not just suspects of a criminal investigation that are the subject of searches. In many jurisdictions, witnesses, victims or anybody associated with a criminal investigation can also be the subject of a search. In our research, we found this includes the girlfriend of a suspected fugitive, somebody who liked the suspect’s photo on Facebook, somebody who asked ‘suspicious’ questions at a gun store.”

Lack of audits “means many agencies that use this technology are never checking, not only to see whether it’s being misused but whether it’s actually a successful tool. Whether they are catching bad guys or whether it’s a colossal waste of money. Which, in some jurisdictions, I suspect it is.”

Garvie spoke about one police force using facial recognition surveillance at a protest against police treatment of people in custody.

“Law enforcement agencies themselves have said this creates a very real risk of people being chilled, not feeling comfortable participating in public protest or public speech, particularly contentious speech, speech that calls into question police activity.”

In other cases, said Garvie, the person an algorithm ranked 319th most likely to match a suspect’s description was nonetheless the one police investigated and arrested, without turning the “quintessentially exculpatory evidence” of the ranking over to defence lawyers; other people’s features have been pasted on to photographs of faces before running them through a recognition system; and celebrity photographs have been put into the system to snag a supposedly lookalike suspect. “The exact wrong person expecting the right result. The case we found is they used the actor Woody Harrelson in place of this guy who was suspected.”

Garvie’s Center on Privacy and Technology advocates a moratorium on facial recognition technology until rules are in place.

Panellist Dr Joe Purshouse from the School of Law at the University of East Anglia in the United Kingdom agreed.

“In the UK, facial recognition has been trialled quite extensively by police forces and some private companies as an overt public space surveillance tool. They get a watchlist, scan a public space, sporting event or protest and look for people on the watchlist. The results of those trials – there’s quite a few reports out, the Science and Technology Select Committee [of the House of Commons] looked at this – show the benefits are minimal and uncertain. The human rights implications for privacy, freedom of assembly – those are chilling.”

And the burden, said Purshouse, falls unequally, tending to affect marginalised communities.

“Suspects of crime, people of lower socio-economic status who are forced to use public space and rely more heavily on public space than people who have economic advantages, perhaps.”
