Developers teach robots to say "no"

To make the NAO robot smarter, researchers are teaching it to say "no" through richer verbal interaction.

Robots, just like humans, have to learn when to say "no." If a request is impossible, would cause harm, or would distract from the task at hand, then it's in the best interest of a 'bot and his human alike for a diplomatic "no, thanks" to be part of the conversation.


But when should a machine talk back? And under what conditions? Engineers are trying to figure out how to instill this sense of how and when to defy orders in humanoids.
Watch this Nao robot refuse to walk forward, knowing that doing so would cause him to fall off the edge of a table:
Simple stuff, but certainly essential for acting as a check on human error. Researchers Gordon Briggs and Matthias Scheutz of Tufts University developed a complex algorithm that allows the robot to evaluate what a human has asked him to do, decide whether or not he should do it, and respond appropriately. The research was presented at a recent meeting of the Association for the Advancement of Artificial Intelligence.
The robot asks himself a series of questions related to whether the task is do-able. Do I know how to do it? Am I physically able to do it now? Am I normally physically able to do it? Am I able to do it right now? Am I obligated based on my social role to do it? Does it violate any normative principle to do it?
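As a rough illustration of how such a chain of checks might be wired together, here is a minimal sketch in Python. The condition names, the `Request` structure, and the world-state keys are assumptions made for this example; they are not the actual architecture Briggs and Scheutz built for Nao.

```python
# Illustrative sketch only: the condition names and data structures are
# assumptions, not the actual Tufts/Nao implementation.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Request:
    action: str          # e.g. "walk forward"
    speaker: str         # who issued the command
    world_state: dict    # the robot's current beliefs about the world


# Each felicity condition is a (question, predicate) pair. The robot walks
# through them in order and refuses at the first failure, citing the
# corresponding question as its explanation.
FelicityCondition = Tuple[str, Callable[[Request], bool]]

CONDITIONS: List[FelicityCondition] = [
    ("Do I know how to do it?",
     lambda r: r.action in r.world_state.get("known_actions", [])),
    ("Am I physically able to do it now?",
     lambda r: not r.world_state.get("at_table_edge", False)),
    ("Am I obligated, based on my social role, to do it?",
     lambda r: r.speaker in r.world_state.get("trusted_operators", [])),
    ("Does it violate any normative principle to do it?",
     lambda r: not r.world_state.get("would_cause_harm", False)),
]


def evaluate(request: Request) -> str:
    """Return an acceptance, or a refusal that names the failed condition."""
    for question, holds in CONDITIONS:
        if not holds(request):
            return f"Sorry, I cannot do that. ({question})"
    return "Okay."


if __name__ == "__main__":
    req = Request(
        action="walk forward",
        speaker="researcher",
        world_state={
            "known_actions": ["walk forward", "sit down"],
            "at_table_edge": True,          # walking forward would mean falling
            "trusted_operators": ["researcher"],
            "would_cause_harm": False,
        },
    )
    print(evaluate(req))  # refusal citing "Am I physically able to do it now?"
```

The point of structuring it this way is that the robot stops at the first failed condition and reports which question failed, which is what makes its "no" an explanation rather than a silent refusal.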
The result is a robot that appears to be not only sensible but, one day, even wise.

Notice how Nao changes his mind about walking forward after the human promises to catch him. It would be easy to envision a different scenario, where the robot says, “No way, why should I trust you?”

But Nao is a social creature, by design. To please humans is in his DNA, so given the information that the human intends to catch him, he steps blindly forward into the abyss. Sure, if the human were to betray his trust, he'd be hooped, but he trusts anyway. It is Nao's way.
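Continuing the sketch above (and reusing its hypothetical `Request` and `evaluate`), one way to model this change of mind is to treat the human's promise as a belief update applied before the request is re-evaluated. The world-state keys are, again, illustrative assumptions rather than the real Nao software.

```python
# Continues the illustrative sketch above (reuses Request and evaluate);
# not the actual Nao code.

def hear_promise(request: Request) -> Request:
    """Model "I will catch you" as a belief update: a trusted speaker's
    promise removes the table edge as a blocking obstacle."""
    if request.speaker in request.world_state.get("trusted_operators", []):
        request.world_state["at_table_edge"] = False
    return request


req = Request(
    action="walk forward",
    speaker="researcher",
    world_state={
        "known_actions": ["walk forward"],
        "at_table_edge": True,            # stepping forward would mean a fall
        "trusted_operators": ["researcher"],
        "would_cause_harm": False,
    },
)
print(evaluate(req))                # refusal: the edge blocks the action
print(evaluate(hear_promise(req)))  # "Okay." once the promise is believed
```

Whether a spoken promise should be allowed to override a physical-safety check at all is exactly the trust question this scenario raises.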

As robot companions become more sophisticated, robot engineers are going to have to grapple with these questions. The robots will have to make decisions not only on preserving their own safety, but on larger ethical questions. What if a human asks a robot to kill? To commit fraud? To destroy another robot?
The idea of machine ethics cannot be separated from artificial intelligence — even our driverless cars of the future will have to be engineered to make life-or-death choices on our behalf. That conversation will necessarily be more complex than just delivering marching orders.