Government use of facial recognition is growing despite backlash over bias and privacy concerns. In the U.S., 27 states now use ID.me’s services to assist with verifying identification for unemployment benefits and other services.
And Ukraine has started using Clearview AI’s facial recognition services at checkpoints. The technology is now common across Europe, too.
Yet another tech monster has escaped its pen. Just deal with it.
BY:
Salvatore Nicci
Technology Analyst / Reporter
PROJECT COUNSEL MEDIA
18 March 2022 (London, UK) – U.S. adoption of facial recognition software hit a speed bump recently when the Internal Revenue Service dropped its controversial plan to require taxpayers to verify their identities using ID.me (via facial recognition technology) to obtain some services. The agency will allow people to do live, virtual interviews with agents instead. The original IRS plan sparked outrage from privacy advocates and resurfaced longstanding concerns about racial and economic bias in facial recognition software.
But as I noted, it was only a speed bump. Public and private institutions are charging ahead with deploying the technology anyway.
Why it matters: Facial recognition systems solve thorny identification problems for government agencies and businesses, but they also raise concerns over bias and privacy, particularly since the U.S. lacks strong data regulations and nobody envisions a national data protection policy any time soon, if ever.
The big picture: Despite the concerns, government use of facial recognition continues to grow in the U.S. and abroad.
• Twenty-seven U.S. states already use ID.me’s services to assist with verifying identification for unemployment benefits and other services. And this month, Washington state said it would start using ID.me in June.
• War-torn Ukraine has started using controversial company Clearview AI’s facial recognition services to “[let] authorities potentially vet people of interest at checkpoints, among other uses,” Reuters reported. And even though the European Parliament today called for a ban on police use of facial recognition technology in public places, it’s popping up all over Europe.
• Airports around the world are installing automated passport border control gates based on facial recognition technology. In recent years, airlines themselves have increasingly installed facial recognition screenings to monitor and regulate passengers boarding planes. The U.S. Transportation Security Administration (the Department of Homeland Security agency responsible for the security of the traveling public) expects to make facial recognition screening mandatory at all U.S. airports by 2024.
• And, obviously, facial recognition software is not just for airport security and law enforcement anymore. It’s being used across retail stores, hospitals, casinos, sports stadiums, and banks. Last year, in our end-of-the-year newsletter for our TMT (technology, media, and telecom) sector subscribers, we noted that some of the most popular stores in the U.S. – including Macy’s and Albertsons – are using facial recognition on their customers, largely without their knowledge. As Dave Gershgorn has noted in his series on Medium, it was used at this year’s Super Bowl, where cameras hidden underneath digital signs captured data on attendees, generating 60,000+ data points: how long they looked at advertisements, their gender and age, plus analyses attempting to detect weapons and to flag anyone on a watch list of suspicious persons. Attendees were never notified any of this was happening.
• But let’s be clear: stores using facial recognition isn’t a new practice. Last year, Reuters reported that the U.S. drug chain Rite Aid had deployed facial recognition in at least 200 stores, starting more than a decade ago, before suddenly ditching the software. In fact, facial recognition is just one of several technologies store chains are deploying to enhance their security systems, or to otherwise surveil customers. Some retailers, for instance, have used apps and in-store wifi to track users while they move around physical stores and later target them with online ads (a minimal sketch of how that wifi tracking works follows this list).
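How does that in-store wifi tracking work? Phones searching for networks constantly broadcast “probe request” frames that include a hardware (MAC) address, and a passive listener can log those addresses as shoppers move past it. Below is a minimal sketch of the technique in Python using the scapy packet library; the interface name is an assumption, a wireless card in monitor mode is required, and modern phones randomize their MAC addresses precisely to blunt this kind of tracking.

```python
# Minimal sketch of Wi-Fi probe-request tracking.
# Assumes a wireless card in monitor mode; "wlan0mon" is a placeholder name.
from datetime import datetime

from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11ProbeReq

sightings = {}  # MAC address -> timestamps: a crude log of repeat visits

def log_probe(pkt):
    """Record the sender MAC of every probe request overheard."""
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2  # the phone's (possibly randomized) MAC
        sightings.setdefault(mac, []).append(datetime.now())
        print(f"{mac} seen {len(sightings[mac])} time(s)")

# Listen passively; the phone never has to join the store's network.
sniff(iface="wlan0mon", prn=log_probe, store=False)
```

A retailer running one listener per aisle gets a rough movement track for every device; tie a MAC address to a loyalty-app login once, and that track gets a name.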
What they’re saying: Regulators and civil liberties groups have pushed hard against the use of ID.me software in particular:
• Caitlin Seeley George, campaign director at Fight for the Future, has said: “What we saw with the IRS using facial recognition is a sign of things to come, actually things as they are. There is no way that facial recognition or other tools that collect biometric data can be used in a safe manner. We liken it to nuclear weapons – too dangerous and shouldn’t be used at all. The backlash to the IRS was swift but a Pyrrhic victory given so many Federal government agencies are adopting it”.
• Sens. Ron Wyden (D-Ore.), Elizabeth Warren (D-Mass.) and Sherrod Brown (D-Ohio) wrote to the Labor Department last month urging the agency to help state unemployment insurance programs move away from private facial recognition contractors.
• From that letter: “States are rightfully looking for solutions to protect against fraud and identity theft. But no one should be forced to submit to facial recognition administered by a private company just to access essential government services”.
The intrigue: Facial recognition can solve many problems for organizations that need to verify a person’s identity, particularly given the obstacles raised during the pandemic.
• For instance, Washington state briefly suspended unemployment benefit payments in 2020 after finding fraudulent claims, made with stolen Social Security numbers and other personal data, that totaled $1.6 million, according to the Seattle Times.
• In 2019, the Government Accountability Office recommended that several federal agencies discontinue “knowledge-based verification,” which relies on a user’s knowing and presenting personal information (like answers to security questions) to prove their identity. The GAO said this verification “tends to be error-prone and can be frustrating for users, given the limitations of human memory.” It suggested analyzing facial recognition software as an alternative.
Between the lines: Facial recognition’s deployment in everyday consumer devices, like the iPhone’s Face ID, has accustomed much of the public to its use.
• A Pew survey from 2019 showed that a majority of U.S. respondents trusted law enforcement to use facial recognition responsibly.
• Facial recognition is “used a lot of different ways, and some are really creepy and concerning, and some are really benign… And in some cases, in the U.S., it’s been used pretty safely and effectively,” said Jeremy Grant, managing director of technology business strategy at law firm Venable LLP, who consults with identity verification businesses.
Yes, but: The highest-profile face-recognition providers have drawn barrages of criticism for failing to play straight with the public. Here is just one pushback, from Jordan Burris, former chief of staff in the White House’s Office of the Federal Chief Information Officer, who is now senior director of product market strategy at Socure, a digital identity verification provider serving the public and private sectors:
“A lack of transparency erodes public trust. Regardless of the tech used, there needs to be a focus on deep testing, security, privacy of the consumer, mitigating bias and getting the outcomes intended by the government”.
Reality check: Even if facial recognition tools provoke a broader groundswell of opposition, businesses have already carried out a massive data harvest over the last decade and built lasting databases of personal information:
• Clearview AI told investors last month that “almost everyone in the world will be identifiable” after it collects its targeted 100 billion facial photos, the Washington Post reported.
• Last year, TikTok updated its terms of service to say the popular app will automatically collect “faceprints and voiceprints” without details on how they will be used.
Facial recognition technology is probably not what we wanted as a society. But it’s way too late for that discussion.
We have covered facial recognition technology in numerous posts. Here are a few of our main points:
• Scraping is one of the more interesting quandaries in platform policies, because it comes with both obvious harms and significant benefits. Scraping is what enabled the malign facial recognition software dystopia known as Clearview AI to gather more than 3 billion images of people and sell them to law enforcement agencies. And scraping is also what enabled the NYU Ad Observatory to collect, with users’ permission, noteworthy evidence about political advertisements on Facebook for academic research.
• Scraping is possible because the World Wide Web is made of text and graphics, and text and graphics can be copied and pasted. If you are reading this post on your desktop, you could write code to scrape this entire article and post it as a series of tweets. If you’re one step more technologically sophisticated, you could write a script to scrape the entire archive of any blog and publish it as an e-book (a minimal sketch of such a scraper follows this list).
• Scraping has been around since the early days of the internet, when potentially valuable information was first left in plain sight on public pages. But recently, the incentives and the opportunities have multiplied. Social networks have become ever-larger repositories, presenting attractive targets for harvesters operating at scale. And the rise of machine learning has brought new incentives, as AI has turned the raw material – especially photos – into potential gold. Clearview AI, for instance, has some of the most advanced AI in the world, which it uses to keep building huge databases of scraped images as the raw material for its service.
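To make the point concrete, here is how little code a basic scraper takes. This is a minimal sketch in Python, assuming the requests and beautifulsoup4 packages; the URL is a placeholder. Looping it over a blog’s archive, or over millions of public profile pages, is only a matter of scale.

```python
# Minimal scraping sketch: fetch one public page, keep its text and images.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/some-blog-post"  # placeholder target page

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

# Text: every paragraph on the page, ready to be archived or republished.
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]

# Graphics: every image address, ready to be downloaded in bulk.
image_urls = [img["src"] for img in soup.find_all("img") if img.get("src")]

print(f"Scraped {len(paragraphs)} paragraphs and {len(image_urls)} image links")
```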
And as we have noted: Clearview has been banned by numerous law enforcement entities, yet 2,000+ U.S. police departments and law enforcement agencies still use it, without disclosure and without thinking through how to deal with false positives.
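Mechanically, what those agencies are buying is a face search: each scraped photo is reduced to a short numeric fingerprint (an embedding), and a query photo is matched by finding the closest fingerprints in the database. Clearview’s actual pipeline is proprietary, so the sketch below illustrates the general technique with the open-source face_recognition library; every file name and URL in it is a placeholder.

```python
# Minimal sketch of a face-search index built from scraped photos.
import face_recognition
import numpy as np

# Index the "scraped" photos: one 128-number embedding per detected face.
database = []  # (source_url, embedding) pairs
for url, path in [("https://example.com/a", "scraped_a.jpg"),
                  ("https://example.com/b", "scraped_b.jpg")]:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        database.append((url, encoding))

# Embed a query photo (say, a frame from a checkpoint camera) the same way.
probe = face_recognition.face_encodings(
    face_recognition.load_image_file("probe.jpg"))[0]

# Match by distance: smaller means more likely to be the same person.
distances = face_recognition.face_distance(
    np.array([enc for _, enc in database]), probe)
best = int(np.argmin(distances))
print(f"Closest match: {database[best][0]} (distance {distances[best]:.2f})")
```

One query photo in, a ranked list of source URLs out. With billions of indexed images, that nearest-neighbor lookup is essentially the whole product, and every near-miss in it is a potential false positive.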
• As we noted, there are many ways to scrape in volume. Many companies now make their data available through APIs, the digital “hooks” that others can use to connect to their systems. This reflects the creeping automation in the information realm, as well as a common business strategy. These days, companies often set their sights on becoming platforms, making themselves an indispensable resource for others. Becoming the go-to source for data on any subject is one way to achieve that. This might raise few misgivings for a company such as eBay, which wants to be seen as the definitive source for all product listings. But it is more troubling when personal information is at stake, especially personal photographs to confirm identity.
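For contrast with screen scraping, here is what collection through an official API typically looks like. The endpoint, parameters, and token below are all hypothetical, but the pattern, paging through structured JSON until the data runs out, is why APIs make bulk harvesting even easier than scraping.

```python
# Minimal sketch of bulk collection through a paginated JSON API.
# The endpoint, parameters, and token are hypothetical placeholders.
import requests

BASE_URL = "https://api.example.com/v1/listings"
HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}

def fetch_all(page_size: int = 100) -> list:
    """Walk the API page by page and accumulate every record it returns."""
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "per_page": page_size},
            timeout=10,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:  # an empty page means there is nothing left to take
            return records
        records.extend(batch)
        page += 1

print(f"Collected {len(fetch_all())} records via the API")
```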
But it is the sprawling, worldwide facial recognition industrial complex that goes pretty much unnoticed. Increasingly, individuals around the world are being recorded, identified, and tracked – by companies that remain in the shadows. Based on two different tracking surveys, by NIST (the U.S. government organization responsible for setting scientific measurement standards and testing novel technology) and OneZero (Medium’s technology publication), there are 45+ companies worldwide that sell facial recognition technology. About 25 sell to U.S. police departments and other U.S. law enforcement agencies, plus U.S. military intelligence entities:
Just 3 noteworthy points from those surveys:
• Toshiba, best known for making PCs, is running more than 1,000 facial recognition projects around the world (many with U.S. police departments), including identity verification systems at security checkpoints in Russia and for law enforcement in Southeast Asia.
• RealNetworks. Remember RealPlayer? More than a decade before Spotify, and years before iTunes, RealPlayer was the first mainstream solution for playing and streaming media on a PC. Launched in 1995, it claimed a staggering 95 million users within five years. The company crashed in the dot-com bust … but then began dabbling in facial recognition software. It now sells that technology to U.S. public schools and for U.S. military drones, and it even launched a surveillance project in São Paulo, Brazil that analyzes video from 2,500 cameras.
• Software contractor Micro Focus, one of a handful of companies keeping the aging COBOL language alive, also makes facial recognition software that can scale to thousands of CCTV cameras scattered across the globe, with the ability to monitor them all from one central dashboard or several.
And then there is technology like Clearview AI and PimEyes, facial recognition websites that can track the same person’s image all around the internet. Both are used by police and law enforcement agencies around the world. PimEyes’ facial recognition engine is not as powerful as Clearview AI’s app, and unlike Clearview AI it does not scrape most social media sites; that social media scraping is precisely what makes Clearview so valuable to law enforcement. Facial recognition search sites were once rare, but they are now growing like weeds.
Regulation? Well, that’s a problem. Yes, one of the main challenges is that facial recognition is mostly unregulated, but almost all current efforts to rein in the technology focus on its use by government and law enforcement, not commercial use. And as we noted last year, the laws and spheres are so different that it would be impossible to write a clean, clearly understood bill regulating both consumer and government use. And while members of the U.S. Congress have proposed several ideas for giving consumers more protection against private companies’ use of facial recognition, significant regulation at the Federal level is “The Impossible Dream”. In the vast majority of U.S. cities and towns, there are no rules on when private companies can use surveillance tech, when they can share the information with police or ICE (Immigration and Customs Enforcement), or when they can feed it into private advertising.
And in Europe? Oh, the irony. The EU, (mistakenly) considered to have the most stringent data protection rules in the world, opened the barn doors years ago, allowing facial recognition use without even an attempt at a think-through. Well, OK. These are the folks who gave us that cacophony of confusion called the GDPR. So really, no surprise.
So now, the EU is facing a backlash over the new artificial intelligence regulation it proposed in April 2021. The rules allow for limited use of facial recognition by authorities, but critics (and even the techno class that has diced the details) agree the carveouts will usher in a new age of biometric surveillance. As we noted in our briefing paper, the European Commission sought to strike a compromise between ensuring the privacy of citizens and placating governments who say they need the tech to fight terrorism and crime. The rules nominally prohibit biometric identification systems like facial recognition in public places for police use, except in cases of “serious crimes,” which the Commission specified could mean cases related to terrorism, but which critics warn is so vague a term that it can (and will) open the door to all kinds of surveillance based on spurious threats. The proposal also says nothing about corporations using the technology in public places.
So what do you get? A proposal with a hodgepodge of vague terms and a complete misunderstanding of how AI works. Yep. Yet another cacophony of confusion that any good lawyer can drive a truck through.
No, facial recognition technology is probably not what we wanted as a society. But it’s way too late for that discussion.