Is AI an existential threat to humanity?
Comments
Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!
AI has made tremendous progress, and I am optimistic about using it to build a better society. But today's AI is still very limited.

The economic and social value of deep learning rests almost entirely on supervised learning, which is limited by the amount of suitably formatted (i.e., labeled) data available. Even though AI is already helping hundreds of millions of people, and is poised to help many more, I don't see any realistic way for AI to pose a genuine threat to humanity.
Looking ahead, beyond supervised learning there are many other types of AI that I find exciting, such as unsupervised learning (where far more data is available, because the data does not need to be labeled). There is a lot of excitement about these other forms of learning on my team and others. All of us hope for a technological breakthrough, but none of us can predict when there will be one.
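The supervised vs. unsupervised distinction above can be sketched with a toy example. Below is a minimal pure-Python illustration (all function names are mine, not from any library): a nearest-centroid classifier needs every training point paired with a label, while a tiny two-cluster k-means recovers the same structure from unlabeled points alone, which is why unlabeled data is so much cheaper to collect.

```python
# Toy contrast between supervised and unsupervised learning in pure Python.
# All names are illustrative; nothing here comes from a specific library.

def supervised_centroids(points, labels):
    """Supervised: learn one centroid per class from LABELED 1-D data."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda y: abs(centroids[y] - x))

def kmeans_1d(points, iters=20):
    """Unsupervised: discover two clusters from UNLABELED data (Lloyd's algorithm)."""
    centers = [min(points), max(points)]  # deterministic init for two clusters
    for _ in range(iters):
        groups = [[], []]
        for x in points:
            groups[0 if abs(centers[0] - x) <= abs(centers[1] - x) else 1].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# Supervised learning needs every example paired with a label...
labeled_x = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
labeled_y = ["low", "low", "low", "high", "high", "high"]
centroids = supervised_centroids(labeled_x, labeled_y)
print(predict(centroids, 1.1))  # -> low

# ...while unlabeled data is far more plentiful, and k-means still
# recovers the two-cluster structure without any labels at all.
unlabeled_x = [0.9, 1.1, 1.3, 4.9, 5.1, 5.3, 1.0, 5.0]
print(kmeans_1d(unlabeled_x))   # two centers, near 1.0 and 5.0
```

The point of the sketch is only the data requirement: the first function cannot be trained at all without `labeled_y`, while the second happily consumes raw numbers.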
I think the fear of "evil killer AI" has already led policymakers and leaders to allocate resources in the wrong direction, toward a problem that does not exist. AI does cause other problems, most notably job displacement. Even though AI will help us build a better society over the next decade, we, as its creators, should also take on the responsibility of solving the problems it causes. I hope open online courses (such as Coursera) can be part of the solution, but we will need more than education.
My fear is that a true AI-- with self consciousness -- is simpler than we think it is, and will be discovered by accident while we are slowly treading down the path of engineering complex systems to implement AI. We barely understand what makes *us* conscious, and it is a fact that quite a number of great inventions were developed by accident. Since we don't even understand ourselves all that well, we may not recognize true AI until something bad happens. We don't know what the "unknown unknowns" are in this scenario. Therefore, it is prudent to think about these things now so we have a way to deal with the situation if and when the real thing happens.
Yes, and no. It is simpler, but so was flying. The thing is, we are putting wings on things and presuming that this is flying, when it takes some very specific things to make flying work. Flapping wings are not necessary, but wing shape matters.
AI is not about intelligent machines. It is about crystallized intelligence: capturing human knowledge in working systems and making them work more intelligently. It's about novel algorithm exploration.
If you put wheels on your toaster, does that make it a car? If you put bird wings on your toaster, does that make it fly? If you crystallize human experience into an algorithm, does that make it have consciousness? Nope, it's just welded there.
We have not yet built the type of machine suited to giving rise to consciousness. We might create one inadvertently; Robert Sawyer explores this theme in his Wake, Watch, Wonder trilogy. But because the basic type of machine we have today is not suited to it, the odds of that happening are very low.

Consider: Human children are born into an area and learn the local language. If you change the environment, the child learns a different language. That kind of ability does not exist in current machines. When it does, then the risk of machine consciousness will be very real....
I can’t remember his name…the FATHER of AI, from google…h(huán)e said something that kind of shifted my perceptions with regard to most things. He said we’ve judged animals and things as less intelligent because they didn’t have the synapses and firings in the brain that we did to solve problem “X” so we assumed it was because they weren’t up to our level of intelligence. Then one day he realized that AI was doing BETTER than a human with FAR FEWER synapses/connections/firings…like he realized that we were kind of the opposite of the peak intelligence because we were taking the long way around…the inefficient stupid way around to do simple things.
I think the eventual turning point will be the day we realize we simply cannot do the things in science fiction, or the things we discuss on Quora, because we don't have the capability or the resources. AI is like us trying to get down from a cliff we cannot climb down directly: we jump and let gravity do the part that is beyond our strength. We set AI in motion, but we need it to find a way to carry us to heights we cannot currently imagine.
The true danger of creating the first self conscious AI won't be the immediate danger it poses to us but the danger we pose to it, and the ethical issues that arise.
The idea that we'll just crack superintelligence in one unexpected step is unreasonable.
Why are the most socially-attuned people in the world, completely dehumanized?
Totally agree, only that there actually already is a pretty good understanding of consciousness, creativity, etc. in other disciplines. Super AI can happen any day, but outside the AI mainstream.
Exactly. I wasn't expecting someone like Google or MSFT to "accidentally" create Neuromancer. I suspect that the people who work on this stuff for a living are pretty focused on monetizing very specific behaviors, and the last thing they need is something with a mind of its own. It's the people outside of the mainstream with the right types of hardware and software doing more "pure research" -- for lack of a better term -- who will discover that some kind of consciousness emerges from a sufficiently complex and interconnected system. Then what? It's prudent to have at least thought about this in advance.
As soon as you believe you have a theory of self-consciousness, we will all try to program it and, if successful, render it as yet another software architecture. ;-)
Yeah, exactly. Gosh, you're a very intelligent man. Let's talk further on this, please.
Original translation: 龍騰網(wǎng) http://flyercoupe.com. Please credit the source when reposting.
"All of us hope for a technological breakthrough, but none of us can predict when there will be one." Isn't that why the issue of runaway superintelligence should be investigated now. We just don't know when there might be a major breakthrough and having a gameplan ready to go when such a situation arises could be of great benefit.
Exactly. One should also take seriously the caution that is expressed by notable experts in this field.
Andrew Ng is absolutely an expert in this field. Most of the caution expressed seems to be by experts in other fields, e.g., Hawking, Musk, etc. If you actually take courses on deep learning, probabilistic graphical models, etc, and you read the cutting edge research papers, then you would realize just how far we are from the type of AI these people fear. As far as your comment below, Nick Bostrom is a philosopher, not a machine learning expert. We can all sit around thinking about "what ifs" but if the reality, as described by Ng in his response, doesn't match the hypothetical situation, then it is little but vain speculation.
The only things you can investigate are the things we know actually ARE intelligent, and that is people. And we have such investigators now -- they are called 'psychologists' and 'psychiatrists' and 'therapists'.

But on the machine front, there is nothing out there to investigate. Unless you are worried about Google's driverless cars abusing their power and not paying tolls, there's nothing there.
To investigate, there has to be something concrete to investigate. Makes sense, right? Sure, if someone gets close to a major breakthrough, we will naturally keep a close eye on it. And everyone's eyes are already on the technological frontier, not because they worry that some financial forecasting program will want to corner the soybean market, or develop improper feelings for someone on Mad Money, but because we know our programs are not always bulletproof. There is no need for an investigation; if something goes wrong, just call tech support.

As for imagining everything that could go wrong in the future, that is the realm of science fiction, and I love science fiction. But if you start believing in scenarios for which there is no evidence whatsoever, then you have moved from the realm of fiction into the realm of psychosis.
You are one of my top idols, in the ranks with Alan Turing, so it's not easy for me to write something disagreeing with you.

However, your argument was based on one implied assumption that has not been proven: that the human race is good, and that preserving human dominance is a desired outcome of AI development.

The problem is that more humans have been victims of other humans than of any other cause of unnatural death.
AI does not need cognitive abilities beyond ours to become a threat. A nuclear bomb has little intelligence, if any, yet it has posed an enormous threat to humanity ever since it was invented, and it still does. As another human invention, there is no essential difference between the nuclear bomb (physical power) and AI (logical power) that would make humans use them differently. The problem with anything powerful, whether physically or logically, is that the power can be put to different purposes, and we humans cannot even agree on which purpose is better. In short, how can we make sure a super-powerful tool like AI never falls into the wrong hands? It does not have to be smarter than all of humanity; it only has to be powerful enough, and it could lead to human extinction.

From a scientist's perspective, AI still has a long way to go; but from a futurist's perspective, AI does not need all the improvements you imagine before it can threaten its creators, us. And that would not be AI's "fault", but our own responsibility.
Good day sir, this is my humble personal opinion. I took a look at AI from my perspective as a human behavior researcher. To answer the question "Is AI an existential threat to humanity?", first we have to ask: how do we define a threat that comes from an intelligent species? It turns out it's very simple: if you share resources with a species, or are yourself a resource it needs, it will be a threat to you (see Maslow's hierarchy of needs for more information).
Since the origin of life, it took organic life forms billions of years to reach the second stage of Maslow's hierarchy of needs. The following stages took only hundreds of millions of years, and we humans reached the stages of esteem and self-actualization within a few thousand years. Throughout history, a rare few extraordinary individuals have even reached the higher stage of self-transcendence. Over the past few decades, the number of wars and conflicts has declined; we enjoy more peace than ever before (see "Is War Over? A Paradox Explained" on YouTube for more persuasive material).

As for AI life forms, although we may clash over shared safety needs, in the long run, as they develop ever faster, we should be drafting rights, laws, and social acceptance for them rather than preparing for war.