Facial Recognition Is Too Dangerous, Says Microsoft, Urging Intervention

Microsoft is calling for government regulation of facial-recognition software, one of its key technologies, saying such artificial intelligence is too important and potentially dangerous for tech giants to police themselves.

On Friday, company president Brad Smith urged lawmakers in a blog post to form a bipartisan and expert commission that could set standards and guard against abuses of face recognition, in which software can be used to identify a person from afar without their consent.

"This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike," Smith said. "The only way to regulate this broad use is for the government to do so."

Smith's announcement comes amid a torrent of public criticism aimed at Microsoft, Amazon and other tech giants over their development and distribution of the powerful identification and surveillance technology - including from their own employees.

Microsoft last month faced widespread calls to cancel its contract with Immigration and Customs Enforcement, which uses a set of Microsoft cloud-computing tools that can also include face recognition.

In a letter to chief executive Satya Nadella, Microsoft workers said they "refuse to be complicit" and called on the company to "put children and families above profits." The company said its work with the agency is limited to mail, messaging and office work.

The demand marks a rare call for greater regulation from a tech industry that has often bristled at Washington involvement in its work, believing government rules could hamper new technologies or destroy their competitive edge.

Smith wrote that the "sobering" potential uses of face recognition, now used extensively in China for government surveillance, should open the technology to greater public scrutiny and oversight. Allowing tech companies to set their own rules, Smith wrote, would be "an inadequate substitute for decision making by the public and its representatives."

The company, Smith said, is "moving more deliberately with our facial recognition consulting and contracting work" and has turned down customers calling for deployments of facial-recognition technology in areas "where we've concluded that there are greater human rights risks." The company did not immediately provide more details.

Regulators, Smith said, should consider whether police or government use of face recognition should require independent oversight; what legal measures could prevent the AI from being used for racial profiling; and whether companies should be forced to post notices that facial-recognition technology is being used in public spaces.

Smith also compared facial-recognition regulation with the public laws demanding seat belts and air bags in cars, saying the rules could be just as important as laws governing air safety, food and medicine. "A world with vigorous regulation of products that are useful but potentially troubling is better than a world devoid of legal standards," he said.

Civil rights and privacy experts have called for widespread bans on facial-recognition software, which they say could lead to dangerous misidentifications and more invasive surveillance by businesses, governments and the police.

The American Civil Liberties Union said Friday that Microsoft's announcement should serve as a wake-up call to Congress, which it urged to "take immediate action to put the brakes on this technology."

Alvaro Bedoya, the executive director of Georgetown Law's Center on Privacy & Technology, said Microsoft's statement was an encouraging acknowledgment of the technology's potential threats to privacy.

"It's a great list of questions. But the real question is how the company would answer them . . . and what companies like Microsoft will say behind the scenes when legislation is actually being drafted and negotiated," Bedoya said.

"Should companies be able to scan the face of every man, woman, or child who walks down the street without their permission? Should the government be able to scan every pedestrian's face in secret?" Bedoya said. "Most Americans would answer those questions with a resounding 'no.'"

No federal law restricts the use of facial-recognition technology, though Illinois and Texas require companies to get people's consent before collecting "faceprints" and other biometric information. The systems are increasingly being used by federal authorities, police departments, local governments and schools to beef up security and surveillance systems.

The technology, however, is far from perfect, and researchers have shown how people of color are more likely to be mislabeled because of gaps in the data used to train the AI. Microsoft said last month that it had trained its systems to more accurately recognize different skin colors.

Amazon, one of Microsoft's key rivals in AI and cloud computing, offers its facial-recognition technology, Rekognition, to police departments at a low cost. (Amazon chief Jeff Bezos also owns The Washington Post.)

The Microsoft announcement highlights how tech companies are grappling with how to pursue the lucrative contracts offered by government authorities while also satisfying employees and customers who urge them to abide by ethical guidelines.

Last month, after Google was criticized for its artificial-intelligence contract with the Department of Defense, chief executive Sundar Pichai said the company would follow a newly established set of ethical standards, including a ban on AI for use in weaponry.

Facebook has also voiced interest in laws that would demand more transparency in online advertising, and its chief executive Mark Zuckerberg testified on Capitol Hill in April following Russian operatives' use of the social network as a way to potentially sway voters during the 2016 election.
