Elizabeth Barrette (ysabetwordsmith) wrote,

Robot Feelings

Here's an overview article about robot feelings, mostly criticizing what other people are saying about them. The truth lies somewhere in between.


So Man and Damasio propose a strategy for imbuing machines (such as robots or humanlike androids) with the “artificial equivalent of feeling.” At its core, this proposal calls for machines designed to observe the biological principle of homeostasis. That’s the idea that life must regulate itself to remain within a narrow range of suitable conditions — like keeping temperature and chemical balances within the limits of viability. An intelligent machine’s awareness of analogous features of its internal state would amount to the robotic version of feelings.

That's not a bad idea. First, it's a premise common across as much of life as is capable of responding to its environment, which is most of it. Second, if effective it would make robots more alert and durable. Third, they're thinking from the bottom up, aiming to create the building blocks of awareness and feelings, not trying to code those from the top down.
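
To make that concrete, here's a rough sketch (in Python, with made-up variable names, ranges, and thresholds -- nothing taken from Man and Damasio's actual proposal) of what a homeostasis-style monitor might look like: track a few internal variables, measure how far each has drifted out of its comfortable range, and treat the worst drift as a crude "distress" signal the robot can act on.

```python
# Hypothetical sketch of a homeostasis-style monitor for a robot.
# Variable names, ranges, and the "distress" calculation are illustrative
# assumptions, not anything from Man and Damasio's paper.

VIABLE_RANGES = {
    "battery_charge": (0.2, 1.0),    # fraction of full charge
    "core_temp_c":    (10.0, 60.0),  # degrees Celsius
    "joint_strain":   (0.0, 0.7),    # normalized load on actuators
}

def drift(value, low, high):
    """How far a reading sits outside its viable range (0 = comfortable)."""
    if value < low:
        return (low - value) / (high - low)
    if value > high:
        return (value - high) / (high - low)
    return 0.0

def internal_state_feeling(readings):
    """Collapse all drifts into one crude 'distress' number a planner could use."""
    drifts = {
        name: drift(readings[name], lo, hi)
        for name, (lo, hi) in VIABLE_RANGES.items()
    }
    return max(drifts.values()), drifts

# Example: an overheating robot "feels" worse than a comfortable one.
distress, detail = internal_state_feeling(
    {"battery_charge": 0.8, "core_temp_c": 75.0, "joint_strain": 0.3}
)
print(distress, detail)  # nonzero distress, driven by core_temp_c
```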


"A robot capable of perceiving existential risks might learn to devise novel methods for its protection, instead of relying on preprogrammed solutions.

“Rather than having to hard-code a robot for every eventuality or equip it with a limited set of behavioral policies, a robot concerned with its own survival might creatively solve the challenges that it encounters,” Man and Damasio suspect. “Basic goals and values would be organically discovered, rather than being extrinsically designed."


Those are good goals. In fact, we can already program robots to solve problems with minimal input from programmers -- for example, letting a robot learn to walk instead of programming it to walk.
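
For readers who haven't seen how "learn to walk" works in practice, here's a toy version of the trial-and-error loop such systems use. It's a bare-bones random-search learner over a single invented gait parameter, with a pretend physics function standing in for real hardware; actual robots use reinforcement learning or evolutionary methods over far more parameters, but the shape of the loop is the same.

```python
import random

# Toy "learn to walk" loop: the robot is never told how to walk, only how far
# it got. simulate_walk is a made-up stand-in for real physics or hardware.

def simulate_walk(stride_length):
    """Pretend physics: distance covered in one trial, noisy, best near 0.6."""
    return max(0.0, 1.0 - abs(stride_length - 0.6)) + random.gauss(0, 0.05)

def learn_gait(trials=200):
    best_param, best_distance = 0.1, simulate_walk(0.1)
    for _ in range(trials):
        candidate = best_param + random.gauss(0, 0.1)   # tweak the gait
        distance = simulate_walk(candidate)             # try it out
        if distance > best_distance:                    # keep what works
            best_param, best_distance = candidate, distance
    return best_param, best_distance

print(learn_gait())  # converges toward a stride the learner was never given
```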


Wait. The robot does not really exist as a unified self in the sense that a dog does. Only a conscious, unified self can experience an existential threat to survival, such as serious pain.

Really? Does an amoeba have a "conscious, unified self" when it writhes away from something trying to eat it, or a place that is too warm or too cold? Does a plant, when it responds to insect pests by pouring out chemicals that taste bad? And let's not forget that some plants send chemical messages to their neighbors screaming "Aphid attack! To arms! To arms!" and suddenly all the others who haven't been bitten yet also taste worse. What is alive goes a lot farther down than what has a sense of self, although we are learning that a sense of self also goes a lot farther down than humans used to think it did.

Looking at the most basic levels and features of life is actually a very good way to build artificial intelligence, which has to start with artificial life in the first place.


Devising novel self-protection capabilities might also lead to enhanced thinking skills. Man and Damasio believe advanced human thought may have developed in that way: Maintaining viable internal states (homeostasis) required the evolution of better brain power. “We regard high-level cognition as an outgrowth of resources that originated to solve the ancient biological problem of homeostasis,” Man and Damasio write.

Well, yes. Life tends toward greater complexity. Bigger brains are one way to gain extra survival advantage. Denser brains will do it too, which is why some birds are so smart; watch these keas solve problems. But they're not the only ways. Social insects have tiny individual brains, yet they work together so the colony as a whole acts like a much larger, more complex organism that can be quite powerful -- accomplishing impressive feats such as making a bridge across a stream or driving away a bear.

If we define life as a dynamic process of resource-gathering, harm-avoiding, and problem-solving, then making artificial life is not terrifically difficult.


No, actually. Homeostasis can be maintained among life forms that have very limited individual thinking skills (termites come to mind). If we humans didn’t have the type of minds we do, we would still have homeostasis; we just wouldn’t do calculus or write screenplays. For homeostasis, our robot needs merely to be alive, not especially clever.

The quest for homeostasis isn't cleverness unto itself, but it is included in all forms of cleverness that we can observe in the available biosphere. Therefore, including it in our efforts to produce artificial life is logical and should help move things along toward the goal of smarter robots.


A robot with a sense of touch may one day “feel” pain, both its own physical pain and empathy for the pain of its human companions. Such touchy-feely robots are still far off, but advances in robotic touch-sensing are bringing that possibility closer to reality.

Sensors embedded in soft, artificial skin that can detect both a gentle touch and a painful thump have been hooked up to a robot that can then signal emotions, Minoru Asada reported February 15 at the annual meeting of the American Association for the Advancement of Science. This artificial “pain nervous system,” as Asada calls it, may be a small building block for a machine that could ultimately experience pain (in a robotic sort of way). Such a feeling might also allow a robot to “empathize” with a human companion’s suffering.

In a simplistic way, distinguishing "good" from "bad" input would make a robot more durable by allowing it to avoid hazards. But this is more nuanced and thus even more useful: allowing a robot to distinguish between a mere annoyance and real danger. That's what pain is for, and it's effective enough -- however imperfect -- to provide a huge advantage. This kind of discernment is what will enable a robot to resolve conflicts between directives, such as between completing a task and preserving itself. Minor annoyances should be ignored in the pursuit of a task, while major dangers should be avoided.
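
Purely as a sketch -- the thresholds and behaviors below are invented, not any real robot's control code -- that discernment might look like a pain signal that only overrides the current task once it crosses a danger line, while smaller twinges get noted and ignored:

```python
# Illustrative only: thresholds and responses are assumptions, not a real API.

ANNOYANCE_THRESHOLD = 0.2   # below this, ignore it and keep working
DANGER_THRESHOLD = 0.7      # above this, drop the task and protect yourself

def choose_action(pain_level, current_task):
    if pain_level >= DANGER_THRESHOLD:
        return "abort task, withdraw from hazard"       # self-preservation wins
    if pain_level >= ANNOYANCE_THRESHOLD:
        return f"continue {current_task}, flag discomfort for later inspection"
    return f"continue {current_task}"

print(choose_action(0.1, "carry box"))   # minor twinge: keep going
print(choose_action(0.9, "carry box"))   # real damage: survival overrides the task
```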

Empathy is more rarefied; it requires a sense of self, a sense of other, and a sense of threat-reward. Having all three of those things does not guarantee it -- sadly, lots of humans lack empathy -- but you need those foundations in order to develop it. So if you want a robot to have empathy, you have to give it those pieces to build on.


Not only are such robots far off but we have no idea how to get there because programming a robot like Affetto (above) to mimic reactions is not the same thing as generating actual reactions. Affetto, however convincing the performance in an Uncanny Valley sense, is not feeling anything.

Here's one of the really big gaps. We DO have an idea how to get there. We analyze how life works, take that apart into its individual building blocks, and attempt to recreate them artificially one at a time. Then we try to put them together to make more and more complex things. We've gone from clunky robots that walk to graceful ones that can run and turn backflips to robots that can learn to walk on their own. That's the history of AI and robotics in a nutshell.

Here also is another huge problem: assuming that robots can't and don't feel anything. Remember when we had to explain that horses feel pain so you shouldn't beat them, and that women and blacks are people so you can't own them? We're headed for exactly the same problems with AI. They are entirely predictable and avoidable, but people will ram right into them because they believe that only biolife is real. It is just as bad to assume that robots can't feel as it is to anthropomorphize them too early.

And this is where we run into a serious problem: if we design robots to behave in ways that feeling creatures behave, how do we tell when they "really" develop feelings as opposed to following a simple program? That's not going to be easy. However, we can improve our accuracy by looking at the complexity. Does it have all the building blocks? If so, then true feeling is more likely than if it did not. Is the end result programmed, or only the base parameters? True feeling is less likely to emerge from the top down than the bottom up. Can it learn and grow? This is a biggie. This is where you can get spontaneous life when you didn't even intend it. Once you enable learning and enough resources for growth, sometimes you get results beyond your wildest imagination. And no, I don't usually sit down at a computer expecting to knock it up, even though my imagination includes an ulterior awareness that this can happen.

From an ethical perspective, if something seems to be aware enough to experience misery, we should not make it miserable; and if it seems aware enough to have personhood, we should not abuse or enslave it. Following these principles requires paying attention to make sure we identify possible creatures and persons around us so we do not harm them. Most of us are not Buddhist monks gently shooing spiders out of a temple, so the level does vary, but most of us are also not human traffickers. Society and individuals figure out where they want to set that threshold. But mistakes in setting it can have pretty awful results. Let's try to avoid this problem with AI by watching its development carefully to spot when we should go from being generally careful to specifically respectful.


“It’s a device for communication of the machine to a human.” While that’s an interesting development, “it’s not the same thing” as a robot designed to compute some sort of internal experience, he says.

True, but communication is super important. It is useful long before we achieve any kind of AI awareness or sentience. A program can be designed to observe humans and calculate valence, that is, whether the user is having a positive or negative experience, which then informs the program's responses. There are already companies spending lots of money to improve valence because they want users to "like" the product well enough to keep using it. Conversely, if a device can communicate problems to a human -- this is stuck, that is causing damage -- then the human can help solve the problem. Since humans are not predisposed to care about computers or robots in general (but can personify ones they interact with regularly) it is often more effective to frame the communication in terms that humans already grasp instinctively. A human may be more likely to stop damaging a robot that acts as if it's in pain than one that flashes a light or buzzes a warning.
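
A crude version of that valence tracking might look like the sketch below: score a few observable signals as positive or negative and adjust the program's behavior accordingly. The signal names and weights here are invented for illustration; real products use far richer signals, but the logic has this general shape.

```python
# Hypothetical valence estimator: signal names and weights are made up.

SIGNAL_WEIGHTS = {
    "session_length_minutes": +0.02,  # longer sessions read as mildly positive
    "rage_clicks":            -0.30,  # repeated frustrated clicking
    "task_completed":         +1.00,
    "error_dialogs_seen":     -0.50,
}

def estimate_valence(signals):
    """Positive result = user probably having a good experience."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

session = {"session_length_minutes": 12, "rage_clicks": 3,
           "task_completed": 1, "error_dialogs_seen": 2}
score = estimate_valence(session)
print("offer help" if score < 0 else "stay out of the way", score)
```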

Once we put those two parts together -- a human communicating to a robot and a robot communicating to a human -- then we have a complete communication circuit. This is another important aspect of empathy: to develop it, we must be able to observe and understand how our choices affect others in meaningful ways, to perceive that we are making them happier or sadder, before we can care about doing one over the other.

More critically in the avoidance of Judgment Day, we need to show good examples when interacting with AI just as we do with children. If you abuse a baby human, it will grow up to hate you. If you abuse a baby dog, it will grow up to bite you. If you abuse a baby AI, it might decide to wipe out humanity in self-defense. So let's just avoid the plot of all those SF movies about killer robots by modeling basic decency instead of teaching them it's okay to hurt people.


Each robot was also given a choice between sharing points awarded for finding food, thus giving other robots’ genes a chance of surviving, or hoarding. In different iterations of the experiment, the researchers altered the costs and benefits of sharing; they found that, again and again, the robots evolved to share at the levels predicted by Hamilton’s equations.
BRANDON KEIM, “ROBOTS EVOLVE ALTRUISM, JUST AS BIOLOGY PREDICTS” AT WIRED (MAY 4, 2011). Paper. (open access)

Given that the robots were programmed to do those things, it’s no surprise that they did them.

This is another bad jump. The robots weren't programmed to BE altruistic. They were programmed to meet certain goals of resource-gathering, and in different iterations of the experiment, altruism was more or less useful in that pursuit. They had a CHOICE whether or not to share. The outcome of that choice was not programmed, only the point of decision itself. So that was quite a useful exploration of how altruism can work under different environmental parameters.
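
If you want to see that distinction in miniature, here's a toy model -- emphatically NOT a reproduction of the Floreano experiment Wired reported on, just a sketch with invented parameters. Each robot carries a gene that biases a share-or-hoard choice; the choice is never dictated, and whether the sharing gene spreads depends on the relatedness, benefit, and cost values, which is Hamilton's rule (sharing spreads when r*b > c) in action.

```python
import random

# Toy illustration of Hamilton's rule (altruism spreads when r*b > c).
# A minimal made-up model: the "share" gene biases a choice, selection does the rest.

def run_generations(r, b, c, pop_size=200, generations=60):
    # True = carries the sharing gene, False = hoarder. Start at 50/50.
    population = [random.random() < 0.5 for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [1.0] * pop_size
        for i, is_sharer in enumerate(population):
            if not is_sharer:
                continue
            fitness[i] -= c  # sharing always costs the giver
            # With probability r the benefit lands on a carrier of the same gene
            # (modeled crudely as the giver's own lineage); otherwise on a random robot.
            target = i if random.random() < r else random.randrange(pop_size)
            fitness[target] += b
        # Reproduce the next generation in proportion to fitness.
        weights = [max(f, 0.01) for f in fitness]
        population = random.choices(population, weights=weights, k=pop_size)
    return sum(population) / pop_size  # final frequency of the sharing gene

print("r*b > c:", run_generations(r=0.5, b=1.0, c=0.2))  # sharing tends to spread
print("r*b < c:", run_generations(r=0.1, b=1.0, c=0.8))  # sharing tends to die out
```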

To understand AI, you have to look at what exactly is being programmed. You also have to understand, as far as we've been able to learn it, how biolife works, because in complex creatures behavior is a dynamic mix of preprogrammed instinct, acquired instinct built from lived experience, and thought about the current situation. If you think humans aren't programmed, try picking up something hot and see how long you can hold onto it before your hindbrain decides you are being stupid and pushes the emergency eject override function.
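
That hot-object reflex maps neatly onto a layered control scheme. The sketch below uses invented names and a made-up temperature limit, not any real robotics framework; it just shows a fast, hard-wired "instinct" layer that can veto whatever the slower, deliberative layer has planned.

```python
# Illustrative layering of "instinct" over deliberate plans; names are made up.

REFLEX_TEMP_LIMIT_C = 55.0  # hard-wired limit the planner cannot negotiate away

def deliberative_layer(goal):
    """Slow, flexible 'thinking' layer: decides what to do with the hands."""
    return f"grip object to accomplish: {goal}"

def reflex_layer(palm_temp_c, planned_action):
    """Fast, preprogrammed layer: can veto the plan, like yanking your hand back."""
    if palm_temp_c > REFLEX_TEMP_LIMIT_C:
        return "release object immediately"   # the emergency eject override
    return planned_action

plan = deliberative_layer("carry the mug")
print(reflex_layer(palm_temp_c=40.0, planned_action=plan))  # keeps holding
print(reflex_layer(palm_temp_c=80.0, planned_action=plan))  # instinct wins
```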


Indeed. One is reminded of Arthur’s sardonic comment in Camelot: “The adage ‘blood is thicker than water’ was invented by undeserving relatives.” Altruistic robots may have some applications in swarm robotics but what about their relevance to humans?

Just because a principle fails in some circumstances doesn't mean it's invalid. A principle that works in 99% of cases is damn good. One that works in 90% of cases is still a lot better than one that works in 60% of cases. Altruism is a way of improving the group's odds, which helps the individual because the group makes survival easier (in a cooperative species).

The relevance to humans is that an altruistic robot is less likely to shove us into the Matrix than a selfish robot. A Terminator originally designed to kill enemy soldiers is a hair's breadth from killing everything that moves. We should aim for altruistic robots because they are congruent with humanity's nature as a social species.


As noted in an earlier article, two definitions of altruism are in play and often conveniently confused: Hamilton’s definition, which originated in order to account for the behavior of social insects, vs. human decisions to show compassion. The confusion bolsters the cause of naturalism (nature is all there is), often called “materialism,” in the social sciences; hence it persists and continues to confuse.

Tch. They're not "different." Those are two aspects of the large and complex behavioral concept of altruism. There are many others.


One could probably program a robot to behave like a social insect, to at least some extent. However, no one has found a way to “program” compassion in humans, never mind robots.

Well, yes, biomimicry is a huge thing in robotics and AI, with social insects a popular model.

As for programming humans to be compassionate, evolution has already provided a solid foundation for that, to which we add by teaching children about compassion.

If we follow the same pattern with AI, then we are more likely to achieve compassionate AI than if we do not.


Feelings, whether for one’s own sufferings or, by extrapolation, those of others, are intrinsic to being alive. Thus, it is unclear, even conceptually, how to produce them in an artificial entity that by its very nature is not alive.

What the Thinker thinks, the Prover proves. If you have predetermined that something is not alive and cannot feel, then you don't bother to look at evidence for or against. And that way lies a repetition of history I do not wish to see.

Someone please keep this guy away from baby AI, I don't want to repeat the plots of all those movies. >_<
Tags: networking, safety, science