Intelligent Warfare
Musk, Hawking, Wozniak: Ban AI warfare, autonomous weapons

By Bill Howard on July 27, 2015 at 4:40 pm

More than a thousand researchers, AI experts, and high-profile business leaders say war is getting out of hand and we should ban "offensive autonomous weapons," lest the world powers wind up in a "military artificial intelligence arms race." They would ban AI development for warfare and autonomous weapons that decide who, what, where, and when to fire. They would draw the line, however, to allow remotely operated devices under human control, such as today's drones.

The signatories include Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, professor Stephen Hawking, Google DeepMind CEO Demis Hassabis, and about 1,000 others. The letter will be presented Wednesday at the International Joint Conference on Artificial Intelligence in Buenos Aires, according to the Guardian, which first reported the story.

AI as the third deadly revolution in warfare

According to the letter, "AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."

On the one hand, they say, artificial intelligence makes the battlefield safer. On the other, it lowers the risk of going to war, especially for the side that strikes first or has more and better AI weaponry.

Beyond gunpowder and nukes, there have been other big leaps in technology that gave one side an advantage: the machine gun (Gatling gun) of 1862, poison gas and tanks in World War I, massive aerial bombardment of cities in the 1930s (taking war beyond the front line and to the civilian population), and potentially biological agents. Ironically, Richard Gatling, inventor of the eponymous weapon, was quoted as believing its efficiency would reduce the size of armies and thus the total amount of death and suffering. The only way it reduced the size of armies was after a battalion charged the guns.

Some new weapons have been banned or sidelined. Since 1995, blinding lasers have been outlawed.

Differences among the signers

There is general agreement that an AI/robotic arms race is bad, especially since such weapons make their own decisions, which could lead to an escalation of fighting as both sides toss more materiel at each other. There are also differences: Hawking and Musk have said AI is the "biggest existential threat" and that full AI might "spell the end of the human race." Wozniak, on the other hand, makes an orthogonal point: robots can be good for people. They might become akin to the "family pet … taken care of all the time." If so, Sony needs to bring back Aibo quick.
Generally, the 1,000-plus signatories appear to see a difference between hands-off autonomous weaponry using AI decision-making, and devices such as drones that operate without humans aboard but are controlled from afar (sometimes back in heartland America) by human operators who decide when to push the button.

Already considered by the UN

In April, a United Nations conference meeting in Geneva discussed futuristic weapons, including killer robots. Some world powerhouses were opposed to limits or bans. The UK, for instance, was in opposition because, it said, a ban wasn't necessary. According to the Guardian, the UK Foreign Office said, "[W]e do not see the need for a prohibition on the use of LAWS [lethal autonomous weapons systems], as international humanitarian law already provides sufficient regulation for this area."

Right now, advantage accrues to the major powers with big budgets. Over time, smaller countries or rogues-without-states could buy or adapt robots and AI to their own purposes. Unlike work on nukes or chemical weapons, work on AI warfare might be easier to mask.

This open letter was announced July 28 at the opening of the IJCAI 2015 conference (International Joint Conference on Artificial Intelligence). Journalists who wish to see the press release may contact Toby Walsh. Hosting, signature verification, and list management are supported by FLI; for administrative questions about this letter, please contact Max Tegmark.