
An open letter from AI researchers warns of pitfalls ahead, and lays out a plan for avoiding them while improving the quality of artificial intelligence


More than 150 artificial intelligence researchers have signed an open letter calling for future research in the field to focus on maximising the social benefit of AI, rather than simply making it more capable.


The signatories, who include researchers from Oxford, Cambridge, MIT and Harvard as well as staff at Google, Amazon and IBM, celebrate progress in the field, but warn that "potential pitfalls" must be avoided.


"The potential benefits [of AI research] are huge, since everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," the letter reads


"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls"


The group highlights a number of priorities for AI research which can help navigate the murky waters of the new technology.


In the short term, they argue that focus should fall on three areas: the economic effects of AI, the legal and ethical consequences, and the ability to guarantee that an AI is "robust" and will do what it is supposed to.


"If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes,but 20,000 lawsuits," marking one potential legal pitfall. And the ethical considerations involved in using AI for surveillance and warfare are also noted


But in the long term, the researchers argue, research should move away from the nitty-gritty towards tackling more fundamental concerns presented by the field - including trying to prevent the risk of a runaway super-intelligent machine.


"It has been argued that very general and capable AI systems operating autonomously to accomplish some task will often be subject to effects that increase the difficulty of maintaining meaningful human control," they write. "Research on systems that are not subject to these effects, minimise their impact, or allow for reliable human control could be valuable in preventing undesired consequences, as could work on reliable and secure test-beds for AI systems at a variety of capability levels"


The letter is also signed by physicist Stephen Hawking and entrepreneur Elon Musk, who has been outspoken about his fear of super-intelligent AI in the past.


"I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that. So we need to be very careful," the space-flight and electric-car pioneer said in 2014. "I'm increasingly  inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don't do something very foolish."


Alongside Musk's two major projects, SpaceX and Tesla Motors, he is also an early-stage investor in Vicarious, an AI research firm which aims to build a computer that can "think like a person," and DeepMind, the Google-owned AI research company. He made the investments, he has said, because he fears a "Terminator"-style outcome if AI research goes wrong.




