A new court decision curbs police use of facial recognition in the UK
Facial recognition has a bunch of issues.

Privacy advocates in the UK are celebrating a recent court victory against police use of facial recognition technology. Although the ruling doesn’t ban facial recognition outright, it significantly narrows the circumstances under which law enforcement can use it.
The court ruled that the police’s use of the technology violated human rights and that the framework governing it suffered from “fundamental deficiencies.”
To grasp the significance of this ruling, we need to take a step back to 2017, when the technology at the heart of the case was first deployed.
That year, the South Wales Police became the first force to deploy automated facial recognition, known as AFR. The technology was used to monitor several public events, including soccer matches, to identify individuals with open warrants, persons of interest, and others wanted by the police.
In 2019, the use of facial recognition caught the public’s attention when Ed Bridges from Cardiff sued the police department, claiming they had violated his human rights by scanning his face on two occasions, in 2017 and 2018.
Although Liberty, one of the UK’s most renowned human rights organizations, backed him, he initially lost his suit. On appeal, however, the Court of Appeal ruled in his favor, finding that the police had violated his human rights.
According to the Court of Appeal, officers were given too much discretion over whom they could place on their watchlists, and there were no clear criteria determining when they could use AFR. On top of that, the court concluded that the police had made no effort to determine whether the AFR system exhibited gender, race, or other biases.
In 2018, the South Wales Police released data showing that 92% of the matches its facial recognition system made that year were false positives. A similar system used in Detroit, USA, was even less accurate, producing false positives 96% of the time, a figure Detroit’s police chief openly admitted.
As a result, big tech companies that build facial recognition solutions, such as Amazon and IBM, have distanced themselves from law enforcement agencies. In June, IBM announced it would stop offering facial recognition products entirely, while Amazon suspended police use of its facial recognition system.
What are your views on allowing the government to use facial recognition technology? Is it essential to have this tech deployed? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.
