"The more the robot was depicted as human -- and in particular the more feelings were attributed to the machine -- the less our experimental subjects were inclined to sacrifice it," says Paulus. "This result indicates that our study group attributed a certain moral status to the robot. One possible implication of this finding is that attempts to humanize robots should not go too far. Such efforts could come into conflict with their intended function -- to be of help to us."
They don't want tools. They don't want AI offspring. They just want slaves. That's a really stupid idea. >_<
February 12 2019, 06:00:32 UTC
Ahhh, self-awareness. Before one can say "then it is", one first has to define what it is, and that is not a simple question. Saying "moral relevance begins with self-awareness" is like saying "human life begins at conception": in one sense it's technically true, but in another it's a fuzzy statement of opinion, and in neither case is it very useful in determining right conduct.
"They don't want tools. They don't want AI offspring. They just want slaves. That's a really stupid idea."
No. The really stupid idea is trying to turn robots into pets, or children, or sex partners, or life companions. The whole point of robots is that they ARE tools: non-living artifacts that can do the work of a human, but which have no 'moral relevance'.
Humans are weird, and will bond emotionally with all kinds of non-living items - not just things like talking dolls or stuffed animals. I bonded with my robots when I was building them, for all that they were mindless little things with nothing even resembling a face. But I had no illusion or desire that they would bond with me, and - most importantly - no fear of hurting them, no possibility of doing wrong to them. I stand by what I said then: if it were possible to hurt them - if they had 'moral relevance' - it would be morally wrong to make them at all.