
22 May 2018

More truth on AI - the rest of the story



Blog: My Unusual Road of Life....
by kerminator


Author: Tom Simonite
Business | 05.16.18, 16:32

Tech Firms Move to Put Ethical Guard Rails Around AI
Microsoft, Facebook, Google and others are creating internal groups and reviewing uses of artificial intelligence.


One day last summer, Microsoft’s director of artificial intelligence research, Eric Horvitz, activated the Autopilot function of his Tesla sedan. The car steered itself down a curving road near Microsoft’s campus in Redmond, Washington, freeing his mind to better focus on a call with a nonprofit he had cofounded around the ethics and governance of AI. Then, he says, Tesla’s algorithms let him down.

“The car didn’t center itself exactly right,” Horvitz recalls. Both tires on the driver’s side of the vehicle nicked a raised yellow curb marking the center line, and shredded.

Horvitz had to grab the wheel to pull his crippled car back into the lane. He was unharmed, but the vehicle left the scene on the back of a truck, with its rear suspension damaged. Its driver left affirmed in his belief that companies deploying AI must consider new ethical and safety challenges. Tesla says Autopilot is intended for use by a fully attentive driver.

At Microsoft, Horvitz helped establish an internal ethics board in 2016 to help the company navigate potentially tricky spots with its own AI technology. The group is cosponsored by Microsoft’s president and most senior lawyer, Brad Smith. It has prompted the company to refuse business from corporate customers, and to attach conditions to some deals limiting the use of its technology.

Horvitz declined to provide details of those incidents, saying only that they typically involved companies asking Microsoft to build custom AI projects. The group has also trained Microsoft sales teams on applications of AI the company is wary of. And it helped Microsoft improve a cloud service for analyzing faces that a research paper revealed was much less accurate for black women than white men.

“It's been heartening to see the engagement by the company and how seriously the questions are being taken,” Horvitz says. He likens what’s happening at Microsoft to an earlier awakening about computer security—saying it too will change how every engineer works on technology.

Many people are now talking about the ethical challenges raised by AI, as the technology extends into more corners of life. French President Emmanuel Macron recently told WIRED that his national plan to boost AI development would consider setting “ethical and philosophical boundaries.” New research institutes, industry groups, and philanthropic programs have sprung up.

Microsoft is among a smaller number of companies building formal ethics processes. Even some companies racing to reap profits from AI have become worried about moving too quickly.

“For the past few years I’ve been obsessed with making sure that everyone can use it a thousand times faster,” says Joaquin Candela, Facebook’s director of applied machine learning. But as more teams inside Facebook use the tools, “I started to become very conscious about our potential blind spots.”

At Facebook’s annual developer conference this month, data scientist Isabel Kloumann described a kind of automatic adviser for the company’s engineers called Fairness Flow. It measures how machine-learning software analyzing data performs on different categories—say men and women, or people in different countries—to help expose potential biases. Research has shown that machine-learning models can pick up or even amplify biases against certain groups, such as women or Mexicans, when trained on images or text collected online.

Kloumann’s first users were engineers creating a Facebook feature where businesses post recruitment ads. Fairness Flow’s feedback helped them choose job recommendation algorithms that worked better for different kinds of people, she says. She is now working on building Fairness Flow and similar tools into the machine-learning platform used company-wide. Some data scientists perform similar checks manually; making it easier should make the practice more widespread. “Let's make sure before launching these algorithms that they don't have a disparate impact on people,” Kloumann says. A Facebook spokesperson said the company has no plans for ethics boards or guidelines on AI ethics.
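The idea behind a check like the one Kloumann describes can be sketched in a few lines. The snippet below is not Facebook’s Fairness Flow, which is internal and unpublished; it is a minimal, hypothetical Python illustration of the underlying idea: compute the same error metric separately for each group and flag the model if the gap between groups exceeds a chosen threshold. The record layout, the choice of false negative rate as the metric, and the 5-percentage-point threshold are all assumptions made for this example.

from collections import defaultdict

def group_metrics(records, group_key="country"):
    # False negative rate per group, computed over actual positives only.
    counts = defaultdict(lambda: {"fn": 0, "pos": 0})
    for r in records:
        if r["label"] == 1:
            g = r[group_key]
            counts[g]["pos"] += 1
            if r["prediction"] == 0:
                counts[g]["fn"] += 1
    return {g: c["fn"] / c["pos"] for g, c in counts.items() if c["pos"]}

def flag_disparate_impact(records, group_key="country", max_gap=0.05):
    # Flag the model if the best and worst groups differ by more than max_gap.
    rates = group_metrics(records, group_key)
    if len(rates) < 2:
        return {"rates": rates, "gap": 0.0, "flagged": False}
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flagged": gap > max_gap}

# Tiny demo with made-up records: the model misses more positives for "MX".
sample = [
    {"country": "US", "label": 1, "prediction": 1},
    {"country": "US", "label": 1, "prediction": 0},
    {"country": "MX", "label": 1, "prediction": 0},
    {"country": "MX", "label": 1, "prediction": 0},
]
print(flag_disparate_impact(sample))
# -> {'rates': {'US': 0.5, 'MX': 1.0}, 'gap': 0.5, 'flagged': True}

In a pipeline, a check along these lines would run on held-out evaluation data before launch, matching the intent Kloumann describes: catching a disparate impact before an algorithm ships rather than after.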

Google, another leader in AI research and deployment, has recently become a case study in what can happen when a company doesn’t seem to adequately consider the ethics of AI.

Last week, the company promised that it would require a new, hyperrealistic form of its voice assistant to identify itself as a bot when speaking with humans on the phone. The pledge came two days after CEO Sundar Pichai played impressive—and to some troubling—audio clips in which the experimental software made restaurant reservations with unsuspecting staff.

Google has had previous problems with ethically questionable algorithms. The company’s photo-organizing service is programmed not to tag photos with “monkey” or “chimp” after a 2015 incident in which images of black people were tagged with “gorilla.”

Pichai is also fighting internal and external critics of a Pentagon AI contract, in which Google is helping create machine-learning software that can make sense of drone surveillance video.

Thousands of employees have signed a letter protesting the project; top AI researchers at the company have tweeted their displeasure; and Gizmodo reported Monday that some employees have resigned.

A Google spokesperson said the company welcomed feedback on the automated-call software—known as Duplex—as it is refined into a product, and that Google is engaging in a broad internal discussion about military uses of machine learning. The company has had researchers working on ethics and fairness in AI for some time but did not previously have formal rules for appropriate uses of AI. That’s starting to change. In response to scrutiny of its Pentagon project, Google is working on a set of principles that will guide use of its technology.

Some observers are skeptical that corporate efforts to imbue ethics into AI will make a difference.

Last month, Axon, manufacturer of the Taser, announced an ethics board of external experts to review ideas such as using AI in policing products like body cameras. The board will meet quarterly, publish one or more reports a year, and includes a member designated as a point of contact for Axon employees concerned about specific work.

Soon after, more than 40 academic, civil rights, and community groups criticized the effort in an open letter. Their accusations included that Axon had omitted representatives from the heavily policed communities most likely to suffer the downsides of new police technology. Axon says it is now looking at having the board take input from a wider range of people.

Board member Tracy Kosa, who works on security at Google and is an adjunct professor at Stanford, doesn’t see the episode as a setback. “I’m frankly thrilled about it,” she says, speaking independently of her role at Google. More people engaging critically with the ethical dimensions of AI is what will help companies get it right, Kosa says.

None have gotten it right so far, says Wendell Wallach, a scholar at Yale University's Interdisciplinary Center for Bioethics. “There aren’t any good examples yet,” he says when asked about the early corporate experiments with AI ethics boards and other processes. “There’s a lot of high-falutin talk, but everything I’ve seen so far is naive in execution.”

Wallach says that purely internal processes, like Microsoft’s, are hard to trust, particularly when they are opaque to outsiders and don’t have an independent channel to a company’s board of directors.

He urges companies to hire AI ethics officers and establish review boards but argues external governance such as national and international regulations, agreements, or standards will also be needed.

Horvitz came to a similar conclusion after his driving mishap. He wanted to report the details of the incident to help Tesla’s engineers.

When recounting his call to Tesla, he describes the operator as more interested in establishing the limits of the automaker’s liability. Because Horvitz wasn’t using Autopilot as recommended—he was driving slower than 45 miles per hour—the incident was on him.

“I get that,” says Horvitz, who still loves his Tesla and its Autopilot feature. But he also thought his accident illustrated how companies pushing people to rely on AI might offer to do more, or be required to.

“If I had a nasty rash or problems breathing after taking medication, there'd be a report to the FDA,” says Horvitz, an MD as well as a computer science PhD. “I felt that that kind of thing should or could have been in place.” NHTSA requires automakers to report some defects in vehicles and parts; Horvitz imagines a formal reporting system fed directly with data from autonomous vehicles.

A Tesla spokesperson said the company collects and analyzes safety and crash data from its vehicles, and that owners can use voice commands to provide additional feedback.

Liesl Yearsley, who sold a chatbot startup to IBM in 2014, says the embryonic corporate AI ethics movement needs to mature fast. She recalls being alarmed to see how her bots could delight customers such as banks and media companies by manipulating young people to take on more debt, or spend hours chatting to a piece of software.

The experience convinced Yearsley to make her new AI assistant startup, Akin, a public benefit corporation.

AI will improve life for many people, she says. But companies seeking to profit by employing smart software will inevitably be pushed towards risky ground—by a force she says is only getting stronger. “It’s going to get worse as the technology gets better,” Yearsley says.

UPDATED, May 17, 12:30 PM ET: This article has been updated with additional comment from Tesla.

