Elizabeth Barrette (ysabetwordsmith) wrote,

AI Fail

This is just embarrassingly bad.

The first words uttered on a controversial subject can rarely be taken as the last, but this comment by British mathematician Lady Lovelace, who died in 1852, is just that—the basis of our understanding of what computers are and can be, including the notion that they might come to acquire artificial intelligence, which here means “strong AI,” or the ability to think in the fullest sense of the word. Her words demand and repay close reading: the computer “can do whatever we know how to order it to perform.” This means both that it can do only what we know how to instruct it to do, and that it can do all that we know how to instruct it to do.

What was true when Lovelace said it of mechanical computers is already well out of date for electronic ones. We already have programs that do things they weren't specifically programmed to do, because they were set up with a learning routine instead of an explicit step-by-step program. Most of the ones that do really far-out things the programmers hadn't considered have been life mimics, where the program is given a task that we know can be done ("evolve a method of motion" or "learn to walk," for example) but sometimes solves that problem in new ways. So we can use a general learning program instead of having to figure out and hand-code a kazillion steps of explicit programming.
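A toy sketch of that idea (my own illustration, not anyone's actual research code): a tiny evolutionary loop where the programmer writes only the goal, and the program discovers the answer by mutate-and-keep. Nothing in the code spells out the solution itself.

```python
import random

def evolve_onemax(length=20, generations=500, seed=42):
    """Minimal (1+1) evolutionary algorithm. The fitness function says
    *what* counts as success (all bits set to 1), but no step of the
    solution is hand-coded -- the loop finds it by trial and error."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(length)]
    fitness = sum(genome)              # fitness = count of 1-bits
    for _ in range(generations):
        child = genome[:]
        child[rng.randrange(length)] ^= 1   # flip one random bit
        if sum(child) >= fitness:           # keep the child if no worse
            genome, fitness = child, sum(child)
    return genome, fitness
```

Swap in a different fitness function ("distance traveled by this simulated gait," say) and the same loop evolves a method of motion instead, which is exactly why these programs surprise their authors.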

This matters tremendously because it's one of the shortcuts hardwired into the universe. Fractals, for example, aren't hand-generated and aren't unique. Trees, river deltas, leaf veins, and circulatory systems all use variations on the same concept of a main line dividing into smaller lines. Fractals are programs, and they repeat elegantly throughout the observable universe. When we set up a learning program, we are capitalizing on that underlying concept: get a thing started and let it fill in the rest by itself.
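The "main line dividing into smaller lines" rule can be shown in a few lines of code (a hypothetical illustration; the parameter names are my own). One recursive rule generates the whole tree; nothing past the rule is hand-drawn.

```python
def branch(depth, length=1.0, ratio=0.6, splits=2):
    """Fractal branching: each segment spawns `splits` smaller segments,
    each `ratio` times as long, down to the given depth.
    Returns the length of every segment in the resulting tree."""
    if depth == 0:
        return []
    segments = [length]                # the main line at this level
    for _ in range(splits):            # ...divides into smaller lines
        segments.extend(branch(depth - 1, length * ratio, ratio, splits))
    return segments

# A depth-4 binary tree yields 1 + 2 + 4 + 8 = 15 segments from one rule.
tree = branch(4)
len(tree)  # 15
```

Change `ratio` and `splits` and you get a river delta, a leaf vein, or a bronchial tree; the rule stays the same, which is the shortcut being described.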

Creating a sentient AI one line at a time, explicitly, would be difficult almost to the point of impossibility. But create a learning module, give it some peripherals to explore with, and as much dataspace as you can possibly hook together, and that thing will trundle toward sentience with all the determination of a toddler making a beeline for the cookie jar. We're not quite there yet, but we have a lot of the tools and techniques needed to make a baby AI that could become a person.

I just hope it doesn't happen until after the parenting clusterfuck blows over and returns to something approaching sanity. We all know what happens when some jackass who shouldn't be allowed to raise sea monkeys gets his hands on a baby AI.
Tags: cyberspace theory, networking, safety, science