Face-off: Activists protest AI surveillance

Law enforcement increasingly uses AI-powered mass surveillance in smart cities to catch crooks. However, privacy advocates warn these solutions could harm people’s freedoms. Eric Johansson finds out more.

Ella Jakubowska doesn’t worry about living in a dystopia, but only because she believes we already live in one. What’s more, she says, unfettered AI-powered mass surveillance could make it worse.

“We should think about how technology exacerbates the dystopia within which we already live,” she says, paraphrasing a quote from privacy activist Dan McQuillan.

Jakubowska is a policy and campaigns officer at the privacy advocacy group European Digital Rights (EDRi). Since its launch in 2002, the organisation has vocally opposed the rise and use of facial recognition software, predictive analytics and different forms of biometric tracking, both by law enforcement and by private companies. Privacy advocates believe these tools can be used to keep citizens under constant scrutiny, monitor journalists when they meet anonymous sources, track dissidents and discourage activists from participating in legal protests. Think Big Brother but with the power of AI and you get the gist.

"Not only is the use of these technologies really infringing on every person's right to privacy and data protection, but because they are being used in a way that amounts to mass surveillance they also have an impact across potentially the full range of people's human rights,” Jakubowska warns.

In the summer of 2020, EDRi launched a coalition of 60 different European organisations opposing facial recognition software: Reclaim Your Face.

AI solutions are now headline staples. Flick through a newspaper and you’re bound to read how AI-powered biometrics have been used to identify suspects from the failed storming of the US Capitol and to monitor Black Lives Matter activists, how they have been central to digitally tracking the spread of Covid-19, and how they have grown commonplace in apps that ensure safe digital onboarding.

Other noteworthy stories include the Moscow police announcing in January 2020 that they would begin using facial recognition software provided by NtechLab to track down criminal suspects. The following month, the New York Times revealed how Clearview AI had scraped billions of pictures from social media sites to create a database for its facial recognition platform, used by thousands of police departments around the world. The revelations caused a massive backlash, as the startup had not asked permission from the people in the pictures or from hosting platforms like Facebook. The company did not respond to requests for comment for this story.

In China, the Xi Jinping regime used the pandemic as a pretext to ramp up the rollout of its Skynet mass surveillance programme, installing more CCTV cameras, expanding the use of facial recognition software and big data analysis, and extending its hotly debated social credit system, which penalises Chinese citizens for misdeeds such as eating on public transport or playing loud music. Individuals falling short could, among other things, face restrictions on movement or limited educational opportunities.

For privacy activists like Jakubowska, this seemingly unchecked scope represents a very real threat for the most vulnerable in society.

“We already have so much inequality in our societies,” she says, fearing that the technologies will be used “to codify and hide those structural societal problems behind the lens of false scientific technological neutrality in a way that legitimises them.”

AI in the sky

The skyrocketing use of AI-enabled mass surveillance is intimately linked to the rollout of smart city technologies. Smart cities are often championed for their ability to improve citizens’ quality of life with solutions that reduce traffic congestion, aggregate journey data, optimise rubbish collection and electricity usage, and help law enforcement make the streets safer. But for privacy activists, smart cities represent a clear and present danger.

“They’re a perfect Trojan horse for mass surveillance because under this guise of genuinely legitimate and good aims, like environmental protections making cities more sustainable, they introduce this vast range of devices that often have the capability to track people,” says Jakubowska.

Craig Young, principal security researcher at Tripwire, warns that the speed of innovation will only exacerbate risk in the years to come.

“Deep learning systems are already good at surveillance tasks but, in coming decades, a generalised machine intelligence could enable surveillance to a degree that would make even George Orwell blush,” he cautions.

The companies behind these solutions usually don’t share these concerns, or else suggest they’re overblown. The global facial recognition industry – which Grand View Research expects will be worth $12.8bn by 2028 – would rather highlight how its solutions can shield citizens from harm.

"I clearly feel safer in a city centre late at night, knowing that there is CCTV and it's being used properly," says Zak Doffman, CEO of Digital Barriers, the internet of things-enabled security company.

Examples of AI-linked biometrics usage include spotting terrorism suspects in crowds, finding missing children, reducing violence at football matches, speeding up passport controls at airports, making stores that open late at night safer by reducing the risk of robberies and shoplifting, stopping fraudsters, making banking apps safer, supercharging healthcare efficiencies, tracking worker attendance, helping to identify lost elderly people and dementia sufferers, and enabling gambling companies to protect customers at risk of addiction.

“It is useful. It is valuable,” Doffman says. “Police leaders will tell you it is a fantastic technology to help keep people safe, but it needs to be used properly. The challenge we have right now is that we have potentially created a level of fear within the public that we need to undo because that has made having sensible debates much more difficult.”

Jakubowska shoots back that this argument ignores reality.

"If we compare all of the harms that are at play with the use of this technology, it vastly outweighs any limited benefit law enforcement might get,” she insists. “A couple of suspects might have been identified [at the Capitol Hill riot], but the technology has been used across the US to persecute people of colour. We've seen a series of high profile wrongful arrests of, in particular, black men across the US.”

While several of these cases have made the headlines, Doffman states it’s still better than the alternative. “If you talk to the police, then they’ll tell you that mistaken identity goes with the territory of policing,” he says. “If you are policing millions of people, occasionally you’re going to come up with a mistaken identity.”

He argues that AI may get it wrong from time to time, but that the risk is significantly lower than when police officers rely only on vague descriptions and photographs.

“Let's be very clear: a computer system is significantly better at recognising an individual than any human being in the world is able to do and it can do it across many more people, but it's not flawless,” Doffman attests. “Therefore it needs to sit as part of a process to quickly work through and then eradicate mistakes.”

Others suggest that while these cases happen, that doesn’t equate to facial recognition software being inherently discriminatory.

"Technology is not racist by itself," says Laura Blanc Pedregal, CMO at facial recognition company Herta. "Our responsibility as researchers and engineers is to provide the best technology possible and to continue our activity in order to obtain unbiased technology."

The pandemic has also introduced additional use cases. As lockdowns ease in tandem with vaccine rollouts, industry representatives are confident that more venues will elect to use their solutions to ensure that everyone at a nightclub has been vaccinated. Some envision that the nighttime economy could similarly benefit from taxis and ride-hailing companies introducing facial recognition software to ensure the safety of drivers and passengers.

“You don’t have to use that taxi if you don’t like the idea that a taxi driver [subscribed] to that,” says Tony Porter, former UK surveillance camera commissioner and current chief privacy officer at facial recognition provider Corsight AI. “But actually, if there’s a market edge and that company has decided that young men and women going home at night want to feel safe and secure, then surely it’s up to them and the company to say this is how we operate.”

The suggestion that “if you don’t like it, then go somewhere else” doesn’t sit well with privacy advocates.

“It’s a horrible premise underlying those systems,” Jakubowska objects. Pointing to the British grocery chain the Co-op, which trialled facial recognition to reduce retail crime in 2020, she argues that these systems’ lack of an opt-out option violates people’s right to privacy. In cases where shoppers have no other local stores, surveillance tools could prevent “people from accessing food.”

“We have a right to shop without our data being violated,” Jakubowska argues.

Industry representatives suggest activists’ concerns are overblown and stoked by fear of examples like China. They argue activists miss the point and fail to engage in serious and productive debates.

"There is a very vocal lobby that will sound the same warning signals, but don't appear to engage in some of the deeper discussions, such as the real value of this technology to society and where that sits," says Porter. ”I've been on many platforms where I've heard the classic phrase, ‘this is chilling’ and it's almost as if that phrase together with ‘dystopia’ is enough to win an argument.”

Others suggest that a few bad apples – companies like Clearview AI, although they don’t mention them by name – don’t help the situation.

"We feel that the industry has not helped itself by trying to run too fast and losing public support," says Doffman. "It needs to scale back. It needs to do this sensibly. We need to build on the success we've seen with identity assurance and educate the public that it has nothing to fear.”

He encourages the industry to be transparent with the media about how solutions work, what their limitations are and how they can be used.

"When you explain it, you get rid of a lot of the silly mystique that surrounds the technology," Doffman says.

Watching the watchmen

Given the public backlash against the rollout of facial recognition tools, industry leaders are, unsurprisingly, embracing pushes for clear guidance from lawmakers. “Regulation is crucial in this case,” says Blanc Pedregal. “The balance between both privacy and security must be fixed.”

In the US, lawmakers have responded to activists’ concerns. Bans on facial recognition software have been introduced in cities like Portland, Boston and Los Angeles, as well as statewide in places like Massachusetts.

Across the pond, the European Commission in April proposed sweeping reforms that would prohibit certain uses of AI, such as a China-style social credit system. The reforms would also restrict law enforcement and companies’ use of facial recognition software unless certain standards are met. Companies breaking the rules could be fined €30m or up to 6% of their total worldwide annual turnover, whichever is higher.
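To make the proposed penalty cap concrete, here is a minimal sketch in Python of the “whichever is higher” rule described above; the turnover figure in the usage example is purely illustrative, not real company data.

```python
# Sketch of the proposed EU penalty cap: a fine of up to €30m or 6% of
# total worldwide annual turnover, whichever is higher. Illustrative
# only; the turnover passed in below is hypothetical.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine under the proposed rules."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)

# For a hypothetical company with €1bn turnover, 6% (€60m) exceeds
# the €30m floor, so €60m is the cap.
print(f"€{max_fine_eur(1_000_000_000):,.0f}")  # -> €60,000,000
```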

Margrethe Vestager is the executive vice president for A Europe Fit for the Digital Age at the European Commission. She made her name launching massive antitrust cases against Silicon Valley goliaths like Google and Apple.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” says Vestager.

Privacy advocates, however, were left disappointed. “It falls very short in the protection of people’s fundamental rights,” says Jakubowska, adding that “the fact that we’ve got any kind of biometric mass surveillance [restrictions] is a limited win.”

Prior to the introduction of the proposals, a group of 40 members of the European Parliament (MEPs) had signed an open letter arguing for an outright ban on these technologies.

Porter suggests that the MEPs had failed to understand what they were signing, “because I understand how very devilishly complicated it is and how, in my experience, I’ve come across a lot of officials [and] ministers that actually are genuinely unaware really of the dichotomy of [the] issue.”

Porter adds that he’s tired of the notion that big tech companies are unethical.

“Well, actually, I think people that speak on behalf of citizens have an ethical obligation to understand what they’re talking about, what the dynamics are, and I’m not entirely convinced we’re at that position yet,” he says.

Others, like Doffman, believe the growing regulatory scrutiny may be harsh to begin with, but are confident that the rules will be relaxed once the usefulness and benefits of facial recognition tools become clearer.

If and when the new legislation will come into force is still anyone’s guess; the process between proposal and ultimate ratification could take years. In the meantime, facial recognition companies will simply have to grow accustomed to being scrutinised by the public.

“The controversy of AI will always be there,” concludes Blanc Pedregal. “The same happened when the video surveillance camera industry started several years ago. Nowadays, every security project includes cameras and everybody thinks that this is normal. The EU wants to protect its citizens and this is positive for all of us.”
