ysabetwordsmith

Robots and Moral Choices

This article explores whether people feel that moral rules should apply to robots. If a robot is simply a machine, it is not morally relevant; but if it is self-aware, then it is. The study started out with surprisingly good results: the more personlike a robot was presented as being, the more humans were inclined to protect it. This is great! Maybe we're not Matrix murderers after all. That would be awesome. Except then the researchers drew this conclusion:

"The more the robot was depicted as human -- and in particular the more feelings were attributed to the machine -- the less our experimental subjects were inclined to sacrifice it," says Paulus. "This result indicates that our study group attributed a certain moral status to the robot. One possible implication of this finding is that attempts to humanize robots should not go too far. Such efforts could come into conflict with their intended function -- to be of help to us."

They don't want tools.  They don't want AI offspring.  They just want slaves.  That's a really stupid idea.  >_<