Experts including Elon Musk call for research to avoid AI 'pitfalls'
An open letter from AI researchers warns of pitfalls ahead, and lays out a plan for avoiding them while improving the quality of artificial intelligence.
More than 150 artificial intelligence researchers have signed an open letter calling for future research in the field to focus on maximising the social benefit of AI, rather than simply making it more capable.
The signatories, who include researchers from Oxford, Cambridge, MIT and Harvard as well as staff at Google, Amazon and IBM, celebrate progress in the field, but warn that "potential pitfalls" must be avoided.
"The potential benefits [of AI research] are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," the letter reads
"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls"
The group highlights a number of priorities for AI research which can help navigate the murky waters of the new technology.
In the short term, they argue that focus should fall on three areas: the economic effects of AI, the legal and ethical consequences, and the ability to guarantee that an AI is "robust" and will do what it is supposed to.
"If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes,but 20,000 lawsuits," marking one potential legal pitfall. And the ethical considerations involved in using AI for surveillance and warfare are also noted
But in the long term, research should move away from the nitty-gritty towards tackling more fundamental concerns presented by the field, the researchers argue, including trying to prevent the risk of a runaway super-intelligent machine.
"It has been argued that very general and capable AI systems operating autonomously to accomplish some task will often be subject to effects that increase the difficulty of maintaining meaningful human control," they write. "Research on systems that are not subject to these effects, minimise their impact, or allow for reliable human control could be valuable in preventing undesired consequences, as could work on reliable and secure test-beds for AI systems at a variety of capability levels"
The letter is also signed by physicist Stephen Hawking and entrepreneur Elon Musk, who has been outspoken about his fear of super-intelligent AI in the past.
"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful," the space-flight and electric-car pioneer said in 2014. "I'm increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."
Alongside Musk's two major projects, SpaceX and Tesla Motors, he is also an early-stage investor in Vicarious, an AI research firm which aims to build a computer that can "think like a person," and DeepMind, the Google-owned AI research company. He made the investments, he has said, because he fears a "Terminator"-style outcome if AI research goes wrong.