Leading researchers gathered this week for the prestigious Neural Information Processing Systems conference have a new topic on their agenda. Alongside the usual cutting-edge research, panel discussions, and socializing: concern about AI’s power. The issue was crystallized in a keynote Tuesday from Microsoft researcher Kate Crawford. The conference, which drew nearly 8,000 researchers to Long Beach, California, is deeply technical, swirling in dense clouds of math and algorithms. Crawford’s good-humored talk featured nary an equation and took the form of an ethical wake-up call. She urged attendees to start considering, and finding ways to mitigate, accidental or intentional harms caused by their creations.
“Amongst the very real excitement about what we can do there are also some really concerning problems arising,” Crawford said. One such problem occurred in 2015, when Google’s photo service labeled images of black people as gorillas. More recently, researchers found that image-processing algorithms both learned and amplified gender stereotypes. Crawford told the audience that more troubling errors are surely brewing behind closed doors, as companies and governments adopt machine learning in areas such as criminal justice and finance. “The common examples I’m sharing today are just the tip of the iceberg,” she said. In addition to her Microsoft role, Crawford is also a cofounder of the AI Now Institute at NYU, which studies the social implications of artificial intelligence.
Concern about the potential downsides of more powerful AI is apparent elsewhere at the conference. A tutorial session hosted by Cornell and Berkeley professors in the cavernous main hall Monday focused on building fairness into machine-learning systems, a particular issue as governments and companies increasingly rely on the technology for consequential decisions. It included a reminder for researchers of legal barriers, such as the Civil Rights and Genetic Information Nondiscrimination acts.
One concern is that even when machine-learning systems are programmed to be blind to race or gender, for example, they may use other signals in data such as the location of a person’s home as a proxy for those attributes. Some researchers are presenting techniques that could constrain or audit AI software.
On Thursday, Victoria Krakovna, a researcher from Alphabet’s DeepMind research group, is scheduled to give a talk on “AI safety,” a relatively new strand of work concerned with preventing software from developing harmful or unexpected behaviors, such as trying to avoid being switched off. Oxford University researchers planned to host an AI-safety themed lunch discussion earlier in the day.
Krakovna’s talk is part of a one-day workshop dedicated to techniques for peering inside machine-learning systems to understand how they work—making them “interpretable,” in the jargon of the field. Many machine-learning systems are now essentially black boxes; their creators know they work, but can’t explain exactly why they make particular decisions. That will present more problems as startups and large companies apply machine learning in areas such as hiring and healthcare. “In domains like medicine we can’t have these models just be a black box where something goes in and you get something out but don’t know why,” says Maithra Raghu, a machine-learning researcher at Google. On Monday, she presented open-source software developed with colleagues that can reveal what a machine-learning program is paying attention to in data. It may ultimately allow a doctor to see what part of a scan or patient history led an AI assistant to make a particular diagnosis. Others in Long Beach hope to make the people building AI better reflect humanity.
Like computer science as a whole, machine learning skews towards the white, male, and western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work. Hanna Wallach, co-chair of NIPS, cofounder of Women in Machine Learning, and a researcher at Microsoft, says those diversity efforts both help individuals and make AI technology better. “If you have a diversity of perspectives and background you might be more likely to check for bias against different groups,” she says, meaning code that calls black people gorillas would be less likely to reach the public.