
The runaway risk of artificial intelligence


17 August 2017

If I were to approach you brandishing a cattle prod, you might at first be amused. But, if I continued my advance with a fixed maniacal grin, you would probably retreat in shock, bewilderment and anger. As electrode meets flesh, I would expect a violent recoil plus expletives.


Given a particular input, one can often predict how a person will respond. That is not the case for the most intelligent machines in our midst. The creators of AlphaGo — a computer program built by Google’s DeepMind that decisively beat the world’s finest human player of the board game Go — admitted they could not have divined its winning moves. This unpredictability, also seen in the Facebook chatbots that were shut down after developing their own language, has stirred disquiet in the field of artificial intelligence.


As we head into the age of autonomous systems, when we abdicate more decision-making to AI, technologists are urging deeper understanding of the mysterious zone between input and output. At a conference held at Surrey University last month, a team of coders from Bath University presented a paper revealing how even “designers have difficulty decoding the behaviour of their own robots simply by observing them”.


The Bath researchers are championing the concept of “robot transparency” as an ethical requirement: users should be able to easily discern the intent and abilities of a machine. And when things go wrong — if, say, a driverless car mows down a pedestrian — a record of the car’s decisions should be accessible so that similar errors can be coded out.


Other roboticists, notably Professor Alan Winfield of Bristol Robotics Laboratory at the University of the West of England, have similarly called for “ethical black boxes” to be installed in robots and autonomous systems, to enhance public trust and accountability. These would work in exactly the same way as flight data recorders on aircraft: furnishing the sequence of decisions and actions that precede a failure.
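The flight-recorder analogy can be made concrete. Below is a minimal sketch, in Python, of an "ethical black box" in the spirit Winfield describes: a fixed-size ring buffer that retains the sequence of decisions and actions preceding a failure. The class name, method names, and sensor fields are illustrative assumptions, not part of any real robotics framework.

```python
# Sketch of an "ethical black box": a flight-recorder-style ring buffer.
# All names and data fields here are hypothetical, for illustration only.
from collections import deque
from datetime import datetime, timezone

class EthicalBlackBox:
    def __init__(self, capacity=100):
        # Fixed-size buffer: the oldest entries are discarded automatically,
        # just as a flight data recorder keeps only the most recent window.
        self.records = deque(maxlen=capacity)

    def log(self, sensor_input, decision, action):
        # Record one timestamped decision/action pair.
        self.records.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "input": sensor_input,
            "decision": decision,
            "action": action,
        })

    def dump(self):
        # After a failure, investigators replay the recorded sequence.
        return list(self.records)

box = EthicalBlackBox(capacity=3)
box.log({"lidar": "clear"}, "proceed", "accelerate")
box.log({"lidar": "object_ahead"}, "classify", "maintain_speed")
box.log({"lidar": "pedestrian"}, "brake", "emergency_stop")
trace = box.dump()
```

In a real system the buffer would be written to tamper-resistant storage, but the core design choice is the same: bounded, always-on logging of the decision chain, so that accountability does not depend on the machine having anticipated its own failure.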


Many autonomous systems, of course, are unseen: they lurk behind screens. Machine-learning algorithms, grinding mountains of data, can affect our success at securing loans and mortgages, at landing job interviews, and even at being granted parole.


For that reason, says Sandra Wachter, a researcher in data ethics at Oxford University and the Alan Turing Institute, regulation should be discussed. While algorithms can correct for some biases, many are trained on already-skewed data. So a recruitment algorithm for management is likely to identify ideal candidates as male, white and middle-aged. “I am a woman in my early 30s,” she told Science, “so I would be filtered out immediately, even if I’m suitable . . . [and] sometimes algorithms are used to display job ads, so I wouldn’t even see the position is available.”
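The mechanism Wachter describes is easy to reproduce in miniature. The toy screener below (all data and names hypothetical) scores candidates by how closely they resemble past hires; because every recorded hire for the role is male, an otherwise identical female candidate scores lower and is filtered out, with no malicious intent coded anywhere.

```python
# Toy illustration of bias inherited from skewed training data.
# The "past_hires" records and scoring rule are invented for this sketch.
from collections import Counter

past_hires = [
    {"gender": "male", "age_band": "40s"},
    {"gender": "male", "age_band": "50s"},
    {"gender": "male", "age_band": "40s"},
]

def screen(candidate, history):
    # Score = average, over attributes, of the fraction of past hires
    # sharing the candidate's value for that attribute.
    score = 0.0
    for key, value in candidate.items():
        counts = Counter(h[key] for h in history)
        score += counts[value] / len(history)
    return score / len(candidate)

male_candidate = {"gender": "male", "age_band": "40s"}
female_candidate = {"gender": "female", "age_band": "40s"}

# The female candidate, identical in every other respect, scores lower
# purely because no woman appears in the historical data.
assert screen(male_candidate, past_hires) > screen(female_candidate, past_hires)
```

The point is not that real recruitment systems use this rule, but that any model rewarding resemblance to a skewed history will reproduce the skew.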


The EU General Data Protection Regulation, due to come into force in May 2018, will offer the prospect of redress: individuals will be able to contest completely automated decisions that have legal or other serious consequences.


There is an existential reason for grasping precisely how data input becomes machine output — “the singularity”. This is the much-theorised point of runaway AI, when machine intelligence surpasses that of human creators. Machines could conceivably acquire the ability to shape and control the future on their own terms.

對(duì)于我們?yōu)槭裁葱枰莆諗?shù)據(jù)輸入是如何變成機(jī)器輸出的,有一個(gè)關(guān)乎人類生死存亡的理由,那就是“奇點(diǎn)”(singularity)。人們對(duì)奇點(diǎn)的概念進(jìn)行了很多理論分析,它是指當(dāng)機(jī)器的智能超過(guò)人類創(chuàng)造者的智力,人工智能失控的那個(gè)臨界點(diǎn)。可以想象,機(jī)器有可能獲得自做主張地塑造和控制未來(lái)的能力。

There need not be any premeditated malice for such a leap — only a lack of human oversight as AI programs, equipped with an ever-greater propensity to learn and the corresponding autonomy to act, begin to do things that we can no longer predict, understand or control. The development of AlphaGo suggests that machine learning has already mastered unpredictability, if only at one task. The singularity, should it materialise, promises a rather more chilling version of Game Over.

 

