Elizabeth Barrette (ysabetwordsmith) wrote,

The First Law

... has some obvious flaws.  Some of these are things that Asimov himself explored in stories.  The matter has become more urgent these days with people deploying drones that can kill human beings.
Tags: news, science, science fiction


  • 2 comments
The one that always bothers me is "or through inaction, allow a human being to come to harm." I like harming myself, sometimes, thanks -- I'm sure that third slice of cake isn't good for me in the long run, but I don't want my Roomba zooming up to knock it from my hand.

Of course one could try to redefine "harm," but I think that's trickier than it looks. (Or perhaps just as tricky as it looks; Socrates spent an awful lot of time talking about "the good.") The cake might actually reduce my total happiness if we just take (tastiness) - (stomachache) - (getting out of shape), but I still don't want to be prevented from eating it. Do we try to factor in the value of doing things without worrying about interfering robots all the time? The value of harming ourselves so we can learn better? The value of freedom? Are these values going to vary from person to person?
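The cake arithmetic above can be made concrete with a toy sketch. Everything here is invented for illustration (the function names, the numbers, the idea of a simple veto threshold): it just shows that a robot weighing only measurable harms would block the cake, and that adding an "autonomy" term flips the answer, which is exactly where the hard part lies, in choosing its weight.

```python
# Toy sketch of the comment's net-happiness framing. All names and
# numbers are hypothetical, chosen only to illustrate the argument.

def net_utility(tastiness, stomachache, fitness_cost):
    """Net happiness as the comment frames it: pleasure minus harms."""
    return tastiness - stomachache - fitness_cost

def robot_allows(action_utility):
    """A naive First Law filter: block any action with negative utility."""
    return action_utility >= 0

cake = net_utility(tastiness=5, stomachache=3, fitness_cost=4)
print(cake)                # -2: on this accounting, the cake "harms"
print(robot_allows(cake))  # False: the Roomba knocks it from your hand

# Add a flat autonomy bonus and the verdict reverses -- but nothing in
# the model tells us what that bonus should be, or whether it varies
# from person to person.
print(robot_allows(cake + 3))  # True once freedom is worth enough
```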

And no matter how much we refine it, I think we come back to: what if I, being an illogical and impulsive creature, decide to do something which is going to harm me (probably in a minor way)? Do we really want robots stopping me?
You can't reduce harm to zero, because humans are mortal and life is dangerous. Overprotection is harm, as demonstrated in some of Asimov's stories.

This law would be most useful if given specific parameters; for example, safety robots programmed with "do not let humans fall off this cliff."
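The "specific parameters" idea can be sketched in a few lines. This is a hedged toy example, not a real control system: the coordinate, margin, and function names are all made up. The point is that a narrow, checkable rule ("don't let humans fall off this cliff") intervenes only near the hazard and leaves cake-eating alone.

```python
# Hypothetical narrow safety rule, in the spirit of the reply's
# cliff example. All constants and names are invented for illustration.

CLIFF_EDGE_X = 100.0   # hypothetical coordinate of the cliff edge
SAFETY_MARGIN = 2.0    # how close a human may get before the robot acts

def should_intervene(human_x):
    """True only when the human is within the margin of the cliff edge."""
    return human_x >= CLIFF_EDGE_X - SAFETY_MARGIN

print(should_intervene(50.0))  # False: far from the edge, no meddling
print(should_intervene(99.0))  # True: inside the margin, the robot acts
```

Unlike the open-ended "through inaction" clause, this rule has a defined trigger, so it never has to weigh cake, freedom, or the value of learning from mistakes.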