[Image: Arguing with Robovie over the robot's mistake while playing a game. Credit: HINTS Lab, UW]
As militaries develop autonomous robotic warriors to replace humans on the battlefield, new ethical questions emerge. If a robot in combat has a hardware malfunction or programming glitch that causes it to kill civilians, do we blame the robot, or the humans who created and deployed it?

Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions. But UW psychologists are finding that people don't have such a clear-cut view of humanoid robots. The researchers' latest results show that humans attribute a moderate degree of moral accountability, along with other human characteristics, to robots that have social capabilities and are capable of harming humans. In this case the harm was financial rather than life-threatening, but it still demonstrated how humans react to robot errors. The findings suggest that as robots become more sophisticated and humanlike, the public may hold them morally accountable for causing harm.