Almost everyone involved in facial recognition tech sees problems with it
An unusual consensus emerged recently among artificial intelligence researchers, activists, lawmakers and many of the largest technology companies: Facial recognition software breeds bias, risks fueling mass surveillance and should be regulated. Deciding on effective controls and acting on them will be a lot harder.
This week, the Algorithmic Justice League and the Center on Privacy and Technology at Georgetown University Law Center unveiled the Safe Face Pledge, which asks companies not to provide facial AI for autonomous weapons or to law enforcement unless explicit laws are debated and passed to allow it. Last week, Microsoft Corp. said the software carries significant risks and proposed rules to combat the threat. Research group AI Now, which includes AI researchers from Google and other companies, issued a similar call.
“Principles are great — they are starting points. Beyond the principles, we need to be able to see actions,” said Joy Buolamwini, founder of the Algorithmic Justice League. None of the biggest makers of the software — companies such as Microsoft, Google, Amazon.com Inc., Facebook Inc. and IBM — has signed the Safe Face Pledge yet.
Large tech companies may be reluctant to commit to this kind of pledge, even if they’re concerned about negative consequences of the software. That’s because it could mean walking away from lucrative contracts for the emerging technology. The market for video surveillance gear is worth $18.5 billion a year, and AI-powered equipment for new forms of video analysis is an important emerging category, according to researcher IHS Markit. Microsoft and Facebook said they’re reviewing the pledge. Google declined to comment.
“There are going to be some large vendors who refuse to sign or are reluctant to sign because they want these government contracts,” said Laura Moy, executive director of the Center on Privacy and Technology.
Microsoft is still selling facial recognition software to governments, and the American Civil Liberties Union took the company to task for that this week. It asked Microsoft to halt the sales and join the organization’s call for a federal moratorium on government use of the technology.
The use of facial recognition for surveillance, policing and immigration is being questioned because researchers, including Buolamwini, have shown that the technology isn’t accurate enough for crucial decisions and performs worse on darker-skinned people.
Providers have responded differently to the scrutiny. Microsoft is defending government contracts generally, while asking for laws to regulate the space. Amazon took issue with research by the ACLU into the Rekognition program it sells to police departments, but the company has also said it’s working to better educate police on how to use the software. Companies including Microsoft, Facebook and Axon, a maker of police body cameras, have formed AI ethics boards. Google published a set of more general AI principles in June.
The Safe Face Pledge asks companies to “show value for human life, dignity and rights, address harmful bias [and] facilitate transparency” and make these commitments part of their business practices. This includes not selling facial recognition software to identify targets where lethal force may be used. The pledge also commits companies to halt sales of face AI products that are not “subject to public scrutiny, inspection and oversight.”
There are also commitments to internal bias reviews as well as checks by outside experts, along with a requirement to publish easy-to-understand information on how these technologies are used and by which customers. Start-ups Simprints Technology, Robbie AI Inc. and Yoti Ltd. were the inaugural signers of the pledge.
“It’s kind of the Wild West when it comes to use of automated facial analysis technology, and it’s also an area that’s shrouded in secrecy,” Moy said. The Safe Face Pledge tries to address both areas, but Moy also believes new laws are needed.
That’s where Microsoft is focusing its attention. Last week it detailed the laws it would like to see passed. Microsoft President Brad Smith, who is also chief legal officer, put the chances of federal legislation in 2019 at 50-50, most likely as part of a broader privacy bill. But he said there’s a far better shot at getting something passed in a state or even a city next year. If it’s an important enough region — say, California — that would probably be enough to make software sellers change their products and practices overall, he said.
In the meantime, Microsoft said it would turn down AI contracts that raise such concerns, and it already has. Smith wouldn’t specify which deals the company has rejected, and he has also said that Microsoft would continue to be a key vendor to the U.S. government.
“We’ve turned down business when we thought there was too much risk of discrimination, when we thought there was a risk to the human rights of individuals,” Smith said.
In contrast, Amazon thinks it’s too soon to regulate. “There are many positive and important uses of this technology that are being implemented today, to include preventing human trafficking, reuniting missing children with their parents, and improving security,” the company said in a statement. “It is too early to come out with blanket statements supporting broad regulation, given this technology is in the early stages of deployment, and we have received no indications of misuse.” Still, the company said it will work with governments on standards and guidelines for the technology to maintain privacy and civil liberties.
Facebook said it’s committed to using the technology responsibly and supports thoughtful proposals. The social networking giant, which uses face recognition to identify people in photos that users post, said it’s eager to work with Microsoft and others on ideas.
While employees and customers can pressure companies to act ethically with regard to AI, more attention needs to be focused on laws and government oversight, said Ryan Calo, a law professor at the University of Washington, who is on the board of AI Now and gets funding from Microsoft. Without broad regulation, if some companies refuse to sell the software, others will step in.
“We have been attempting to get companies to cease providing tools to the government, rather than trying to ensure the government doesn’t do things we don’t agree with,” Calo said. “They are government agencies — we ought to be able to police them. We can’t ask technology companies to make it all go away.”
Bass writes for Bloomberg.