Welcome to our Cyber Security News Aggregator.

Cyber Tzar provides a cyber security risk management platform, including automated penetration tests and risk assessments culminating in a "cyber risk score" out of 1,000, just like a credit score.

Professor Fraser Sampson, Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner discusses facial recognition technology with SJUK

published on 2022-09-08 08:54:46 UTC by Jamie Marshall
Content:

Do you think that the police should be allowed to use facial recognition technology? I’m asked this question almost every day (which tells you something itself). When considering the answer, I picture a scene along the following lines:

A gunman opens fire in a crowded underground train. He also detonates a device in the carriage. Schoolchildren are believed to be among the many passengers requiring medical treatment after suffering from bullet wounds, smoke inhalation, falls and panic. The gunman is at large somewhere in the underground system, and the police become aware of his identity after finding a key to a rental van at the crime scene along with a 9mm semi-automatic handgun and extended magazines, a hatchet, fireworks, a liquid believed to be petrol, a dustbin and undetonated smoke grenades.

This is not from Netflix or Minority Report. This is the attack on the New York subway station at 36th Street, Sunset Park in Brooklyn on 12 April this year as reported in The Guardian. The hunt for the suspect, Frank R. James, involved multiple federal and state agencies and hundreds of officers. It lasted 29 hours.

Freeze frame at one hour after the incident and, in many ways, you have the exemplar use case for Live Facial Recognition in law enforcement. A terrorist attack in a densely populated city, taking place on a transport system equipped with extensive surveillance camera systems; an identified suspect and available images of what he looks like. He’s armed, he’s fired 33 rounds into a crowded carriage and detonated a device. And he’s missing.

If at this moment the police had the technical capability to feed his image into the combined surveillance systems in the area and ‘instruct’ the cameras to look for their suspect, on what basis could they responsibly not do so? And that’s the “if” that we have to recognise now because, while the technology is available, the parameters of where, when, how and by whom it can be used in a wide spectrum of cases – most of them very different from the extreme illustration depicted above – are not.

In the Brooklyn case it doesn’t appear that such a level of surveillance capability was available. Instead, law enforcement agencies named the suspect and released his picture, urging the public to keep sending them footage from the crime scene and elsewhere as they pieced together his movements. This response and the reliance on the citizen’s technology – and their willingness to share it – are also a critical feature of how police surveillance of public space has shifted. Here’s why.

The surveillance relationship

Public space surveillance by the police in England and Wales is governed largely by the Surveillance Camera Code of Practice. But practice has moved on from the world originally envisaged by the Code’s drafters, a world in which the police needed images of the citizen to one where they also need images from the citizen. Following any incident, many police forces now make public requests for images that might have been captured on personal devices.

In the Brooklyn case, an FBI spokesperson reportedly said afterwards that they could not have apprehended the suspect without the public’s help. It’s important to recognise how this aspect of the ‘surveillance relationship’ with the citizen has changed since the current regulatory frameworks for surveillance were introduced. Often, the citizen is also capturing images of the police themselves: the faces of many law enforcement personnel who visited the Brooklyn crime scene were broadcast worldwide on global news channels.

As people now have access to surveillance tools that only a decade ago were restricted to state agencies, the risks of facial recognition technology being used to frustrate vital aspects of our criminal justice system such as witness protection, victim relocation and covert operations are obvious, yet this aspect receives little attention in the many debates on the subject. Some might say that, if a city were to synthesise its overall surveillance capability – across its transport network, street cameras, traffic cams, dash cams, body-worn devices and employees' smartphones – this amounts to the same thing as asking citizens to send in their images, only in a far more efficient, effective and less randomly intrusive way.

Arguably it is, but in order to arrive at the freeze-framed moment above, a city would first need to develop a fully integrated public space surveillance system equipped with facial recognition technology, sound and voice analytics, vehicle licence plate readers and a host of other features, none of which would be discernible to the naked eye. Such cities already exist, though not in the UK, and in many of them the technology is used by the state in a way that would be unlawful and probably unacceptable here.

Once installed, such an integrated surveillance system would be capable of many everyday things, falling way short of extreme terrorism incidents. It would, for example, be spectacularly efficient at ticket barriers, letting through only those passengers known to have bought a ticket or travel card.

Technologically it would be unsurpassable in its ability to find people wanted for other crimes – from serial rapists to motorists who'd dodged speeding fines – and to find people believed to have breached immigration rules, broken curfews or skipped bail. And it would be ruthlessly vigilant in excluding known sex pests from the city's transport system if it were programmed so to do. But would the use of such newly intrusive measures be justified in these less-than-terrorism cases? If so, in which ones, and who would decide? In order to answer those questions, we need to know a lot more about the system and the accountability of those using it.

Possible, permissible, acceptable

First, we need to look at the system and its use from three perspectives: the technological (what's possible), the legal (what's permissible) and the societal (what's acceptable). How good is the technology at accurately identifying the face it's looking for in a dynamic crowd? Is it equally accurate across all faces irrespective of their age, ethnicity, structure and skin tone? How many millions of faces is it proportionate to scan in order to find a person who failed to appear at court for being drunk and disorderly?
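The proportionality question above has a mathematical dimension worth making concrete: even a system with a very small error rate produces a large absolute number of wrong flags when it scans millions of faces. The sketch below uses entirely hypothetical numbers – no real deployment's accuracy figures are implied.

```python
# Illustrative base-rate arithmetic only. The 0.1% false-positive rate
# and the one-million-face crowd are hypothetical assumptions, not
# figures from the article or any real system.

def expected_false_positives(faces_scanned: int, false_positive_rate: float) -> float:
    """Expected number of people wrongly flagged as a match."""
    return faces_scanned * false_positive_rate

# A system that wrongly flags only 0.1% of faces it scans...
flagged = expected_false_positives(faces_scanned=1_000_000,
                                   false_positive_rate=0.001)
print(flagged)  # 1000.0 wrongly flagged people per million faces scanned
```

The point of the toy calculation is that "how accurate is it?" cannot be separated from "how many faces will it scan?": the same algorithm that seems near-perfect in a one-to-one check can generate hundreds of false alerts a day when turned on a city's crowds.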

Many facial recognition algorithms need to be 'trained', which means scanning as many manifestations of physiognomy as possible, including those of children and other 'categorisations' of intersectionality. How far people are even aware of these features in what are not simply 'cameras' but powerful computers is unclear but, if the use of the technology is to be acceptable to the citizen, it will need to be both understood by and accountable to them.

When the surveillance system is only comparing the picture that you freely uploaded when buying your travelcard with your face as you approach the station (one-to-one matching), that raises different issues from when it scans millions of moving faces to pinpoint you in a crowd (one-to-many matching).

If it’s looking for you on a one-to-many basis, you probably didn’t agree to this, so where did it get your image in the first place? If it’s looking solely for people ‘wanted’ by the police, how serious does the offence of which you’re suspected have to be before it’s used? A breach of COVID-19 rules?

How about a suspicion that a specific group of people are carrying an infection, or where the system is acting on the instructions of another country looking for one of their citizens under an Interpol Red Notice? Should the system be used where the person being looked for is not a danger to us but is vulnerable, such as a missing child or someone who is unable to look after themselves?

Trust and confidence

Second, we need to have trust and confidence in both the technology and our technology partners. The very specific role of facial recognition technology used by the police in identifying and persecuting Uyghur Muslims in northern Xinjiang Province, China has been recognised by our government, along with the direct involvement of companies such as Hikvision and Dahua. This has made the introduction of facial recognition technology in other countries all the more sensitive and important, and if our police are to retain the trust and confidence of communities here, we need conspicuous ethical leadership.

As most of the UK’s biometric surveillance capability is in private ownership, the police will only be able to harness the legitimate opportunities it brings by working in trusted surveillance partnerships. Whether it’s on the basis of their security, their human rights record or simply their willingness to engage in public scrutiny, private surveillance companies that want to work with the police will need to demonstrate their trustworthiness, while the police will need to take greater care over whose corporate company they keep.

Regulation and standards

Finally, what are the rules governing this technology, activity and accountability and where are they set out? Are there minimum standards for algorithms before they can be used in our streets and stations and schools? Do the companies have to be accredited before the images they produce can be used in evidence to support a prosecution?

Who do I go to if I want more information or to register a complaint? This whole area involves privacy and data protection considerations, but it’s not ‘just’ data any more than it’s ‘just’ photography. In a technology-driven world where decisions are increasingly likely to be automated, the need to provide clear oversight and accountability is all the greater. Otherwise, to paraphrase Arendt, we’ll have surveillance tyranny without a tyrant.

The surveillance question of our times

Last month I was pleased to be invited to speak at the launch of the Ada Lovelace Institute’s three-year research programme into the challenges of biometric technology. The Ryder Review considers the legal and societal landscape against which future policy discussions about the use of biometrics will take place, and the extent to which the current distinctions between established regulated biometrics (fingerprints and DNA) and others such as facial recognition adequately reflect both risk and opportunity.

The event noted that it’s over a decade since the government abandoned the concept of compulsory ID cards, yet we are morphing from a standard police surveillance model of humans looking for other humans to an automated, industrialised process (as some have characterised it, a move from line fishing to deep ocean trawling).

In that context we should recognise concerns that we may be stopped on our streets, in transport hubs, outside arenas or school grounds on the basis of AI-generated selection and required to prove our identity to the satisfaction of the examining officer or of the algorithm itself. The ramifications of AI-driven facial recognition in policing and law enforcement are both profound enough to be taken seriously and close enough to require our immediate attention.

Following the arrest of the suspected Brooklyn attacker, Keechant Sewell, the New York City Police Commissioner, said: “We were able to shrink his world quickly, so he had nowhere left to turn.”

Facial recognition technology will dramatically increase the speed at which the police can shrink the world of a terror suspect on the run in the future. How far it should be allowed to shrink everyone else’s world in doing so is the surveillance question of our times – and so far it has not been answered.

Fraser Sampson

By Fraser Sampson, Commissioner for the Retention and Use of Biometric Material and Surveillance Camera Commissioner


https://securityjournaluk.com/facial-technology-for-policing/?utm_source=rss&utm_medium=rss&utm_campaign=facial-technology-for-policing   
Published: 2022 09 08 08:54:46
Received: 2022 09 08 10:13:01
Feed: Security Journal UK
Source: Security Journal UK
Category: Security
Topic: Security
Views: 2
