Why King’s Cross facial recognition tech is proving so controversial
Use of hi-tech surveillance systems raises fears over the public’s privacy
Facial-scanning technology used in the King’s Cross area of London to track “tens of thousands of people” has come under fire from privacy campaigners.
The 67-acre site, which has recently been redeveloped to include more housing and a new British headquarters for Google, features “multiple cameras” that monitor the activity of visitors, the Financial Times reports.
Argent, the site’s developer, told the BBC that the technology had been deployed “in the interest of public safety” and compared the area to other public spaces.
But privacy campaigners fear that private firms could use the technology to conduct “secret identity checks on the public”, The Daily Telegraph notes.
How does face-recognition tech work?
Simply put, face-scanning technology uses a combination of cameras and artificial intelligence (AI) to identify individuals from the distinctive features of their faces.
According to The Guardian, a computer “scans frames of video” and allocates a “vector” to each face, which essentially maps and converts a person’s facial profile into a quantifiable data format.
The data is then cross-checked with “people on a watchlist”, before being ranked and presented for a human moderator to review, the newspaper notes.
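The pipeline described above – turn each detected face into a numeric vector, compare it against a watchlist, and rank candidates for a human to review – can be sketched in a few lines. This is a minimal illustration of the general technique only: the embeddings, names, dimensions and threshold below are invented placeholders, and real systems derive much longer vectors from a neural network rather than hand-written numbers.

```python
import math

# Hypothetical 4-dimensional face vectors; deployed systems typically use
# 128 or more dimensions, produced by a neural network from video frames.
WATCHLIST = {
    "person_a": [0.9, 0.1, 0.3, 0.2],
    "person_b": [0.1, 0.8, 0.5, 0.4],
}

def cosine_similarity(u, v):
    """Similarity between two face vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def rank_matches(face_vector, watchlist, threshold=0.9):
    """Compare a detected face against every watchlist entry and return
    only confident matches, best first, for a human moderator to review."""
    scores = {
        name: cosine_similarity(face_vector, ref)
        for name, ref in watchlist.items()
    }
    return sorted(
        ((name, s) for name, s in scores.items() if s >= threshold),
        key=lambda pair: pair[1],
        reverse=True,
    )

# A freshly detected face close to person_a's reference vector:
matches = rank_matches([0.88, 0.12, 0.28, 0.21], WATCHLIST)
```

The threshold is the key privacy lever: set it too low and the system surfaces false matches of innocent passers-by, which is the misidentification risk campaigners raise below.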
In the case of the King’s Cross area, an Argent spokesperson said that facial recognition was one of “a number of detection and tracking methods” used in the developed zone, the BBC reports.
However, the spokesperson insisted that the firm has “sophisticated systems in place to protect the privacy of the general public”.
What legal hurdles do face-recognition systems bring?
Under the European Union’s General Data Protection Regulation (GDPR) laws, introduced last May, face-scanning cameras are classified as systems that “collect information that is inherently personal”, according to The Times.
The technology is legal, provided organisers inform the public that such systems are in place and how their data will be processed, the newspaper adds. Information can be collected through facial-detecting systems only for “legitimate interest”, such as for security, but it cannot be passed on to third parties for marketing purposes.
Argent insists that the systems it uses in King’s Cross are there for public safety and to offer the “best possible experience”, the Times reports.
But the FT claims that the developer has not confirmed how many cameras are in use in the area, nor how long the system has been in place.
A similar system is also set to be installed across a 97-acre estate in Canary Wharf, east London, though the technology will not be used to monitor pedestrians and workers continuously, sources close to the matter told the paper. Instead, face-scanning tech will be “limited to specific purposes or threats”.
How have privacy campaigners responded?
Silkie Carlo, director of the non-profit privacy group Big Brother Watch, told the Daily Telegraph that “huge areas of our capital have been sold off, privately policed, and are now being covered with Chinese-style surveillance.
“Private companies are asserting the right to monitor and secretly conduct identity checks on tens of thousands of us,” she said. “What happens with our data is anyone’s guess.”
Meanwhile, Hannah Couchman, a policy and campaigns officer at human rights group Liberty, told the Times that the tech is “more likely to misidentify people of colour and subject them to an intrusive and unjustified stop.
“There has been no transparency about how this tool is being deployed.”
The use of face-scanning cameras has also attracted the attention of the Information Commissioner’s Office (ICO), the UK’s independent data protection regulator, which reports to the Government.
“The ICO is currently looking at the use of facial recognition technology by law enforcement in public spaces and by private sector organisations, including where they are partnering with police forces,” it said in a statement. “We’ll consider taking action where we find non-compliance with the law.”