Will robots take over the world?

A statement by Joanna Bryson, after one solicited by Sunny Bains for a Focus magazine article in May 2004
(I've cleaned it up a little bit, but the content is the same.)

Note: I have a web site on this topic.

I believe that IF robots take over the world, it will not be the work of the robots. Robots are artifacts we construct; we imbue them with their goals and their abilities. So there is no question of whether robots will take over the world. The question is whether someone else will take over the world with robots.

This is actually a very interesting question for AI in general: When an intelligent artifact makes a mistake, who is at fault? The point of the paper I wrote with Phil Kime on this topic is that an intelligent artifact is an artifact like any other, and responsibility for it lies with the people who build and operate it.

So a robot army going wrong and wiping out a village of civilians is just the same as a dam going wrong and wiping out a village of civilians. The manufacturers & operators of either artifact are the ones who must bear responsibility (& will probably spend a few decades suing each other).

Believe it or not, in the past animals have actually been hanged for murder, but that makes more sense to me than accusing a robot of murder, even if it "autonomously" made a decision that led to someone's death.

The most effective argument I've heard against this stance is that if we use evolutionary algorithms, then we might create robots that are automatically / autonomously greedy. In this case, the robots could consume all available resources, just as Tierra would consume all available disk space & CPU given half a chance on a computer.  I disagree. First of all, for the technological reasons that Sunny Bains mentions, I think this is much more likely to be a problem for geneticists or nanotechnologists than for conventional roboticists (that is, I've never seen a robot that could actually fight anything much for a resource.) But secondly, again, this is no more of a threat than current(?) research into highly infectious diseases or genetically modified plants that use airborne pollination. Fixating on the AI aspect is wrong. The right thing to worry about is the social institutions that control our military and pharmaceutical labs and their related research.
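To make the Tierra comparison concrete, here is a toy sketch (not Tierra itself; the soup size and doubling rule are illustrative assumptions) of why selection for nothing but replication ends up consuming every available resource:

```python
# Toy replicator "soup", loosely inspired by Tierra: each generation,
# every replicator copies itself into a free slot until memory runs out.
# SOUP_SIZE and the doubling rule are illustrative assumptions.

SOUP_SIZE = 64  # total slots available in the soup

def run(steps: int) -> int:
    """Return how many slots are occupied after `steps` generations."""
    occupied = 1  # start with a single seed replicator
    for _ in range(steps):
        # Each occupant attempts one copy; growth is capped by capacity.
        occupied = min(SOUP_SIZE, occupied * 2)
    return occupied

print(run(3))   # 8 -- exponential growth at first
print(run(10))  # 64 -- then the soup is simply full, and stays full
```

The greed here is nobody's intention: it falls straight out of the copy loop, which is exactly why the worry attaches to self-replicating systems in general rather than to robots in particular.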

In short, if you want to know what might go wrong with robots, look at the "justice" in Guantanamo Bay or the mine field victims of Afghanistan & Cambodia, not at Science Fiction.  Banning or blaming robots isn't the answer; paying attention to politics and government is.

A second statement, June 2004, written for the same purpose.

No agent takes an action without motivation.  I don't mean something as complicated as "a good reason" --- I mean that there must be some motive force that causes the action.  There must be something inside the agent that triggers it to act in a given situation.

What motivations might lead robots to be dangerous?  Greed, a thirst for power, a desire for freedom.  As humans we can easily ascribe these things to robots because we have empathy for them.  That is, we identify with the robots, we think "if I were a robot, I wouldn't want to be a slave."  But while this is often a good way to reason about other people (since their brains work pretty much like yours), it isn't a good way to reason about a robot.  Nothing exists in a robot unless we put it there, because we are the ones who build robots, from scratch.  Of course, we may put some things in a robot by mistake, but if we do, we have an obligation to fix them.

So would anyone ever put greed into a robot?  Well, maybe.  There's a reason why humans are greedy -- it's because we need to take care of ourselves in order to survive and to have our children survive.  So similarly, we may want to instill our robots with desires, so that they can better take care of themselves (or us.)  We might want a robot to be greedy for attention, because it makes us feel needed (dog lovers know what I mean!)  We might want to make a robot greedy for energy, so that it will be careful not to waste any.  We might want a robot to seek independence, so that it doesn't bother us too much of the time but rather gets on with what it's supposed to do.

But there are two things to realize here. The first is about feelings. Although we can make a robot seek attention (for example, try to make sure it has eye contact), there would be no reason to make it feel lonely or unloved if it couldn't get that attention --- even if we knew how to do that (which we don't.)  Even if the robot was programmed to gather as much gold as it could find, it wouldn't need to feel happy when it had gold or unhappy when it didn't.  Even if we made the robot smile and say "Wow, I'm so happy!" when you pet it, that wouldn't in itself create the real feeling inside the robot of happiness.  So there is no reason to worry about ethical obligations toward a robot's feelings.

The second thing is that, since we are the people who build the robots, it is up to us to determine the goals, desires, and even feelings (if it's possible for us to figure out how to really program those) the robot will have. Remember, robots aren't people, they are artifacts we build, so saying this isn't cruel.  In fact, we actually have ethical obligations not to build in things that would ever make us have to choose between a robot's needs and a human's.  Robots can have their experience backed up on disk, and they can have their bodies mass produced, so we should never have to worry about losing or damaging a robot that we're particularly invested in.

Intelligence means being able to make choices, being able to express more than one behavior.  As we make our machines smarter, we should be able to make them better able to stay out of our way.  For example, if land mines become robots, we should legislate that they be programmed to surrender at the end of a war (just by exposing themselves).  This could be much more effective than the current laws that say humans have to keep maps of where mines are laid.  This won't take any complicated motivation -- the robots won't really know if they are at war or at peace.  They'll just know if they have a "hide" signal or a "show yourself" signal, and they'll do what they're told.  Just like a light, when you tell it "off" or "on" with a switch.
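The light-switch analogy can be made literal. Here is a minimal sketch, assuming hypothetical signal names ("hide" / "show yourself") and no real mine-control standard, of how little machinery such a robot would need:

```python
# Hypothetical sketch of a surrendering mine as a two-state machine.
# The signal strings and class name are illustrative assumptions.

class MineController:
    """Obeys the last broadcast signal heard; it has no concept of war
    or peace, exactly like a light obeying its switch."""

    def __init__(self) -> None:
        self.exposed = False  # deployed hidden by default

    def receive(self, signal: str) -> None:
        if signal == "show yourself":
            self.exposed = True   # surrender: reveal position
        elif signal == "hide":
            self.exposed = False
        # Unrecognized signals are simply ignored.

mine = MineController()
mine.receive("show yourself")  # e.g. broadcast when hostilities end
print(mine.exposed)  # True
```

There is no motivation anywhere in this controller, which is the point: the "decision" to surrender lives in whoever sends the signal, not in the mine.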

If you think of a robot as just a machine like the ones you own, only with a little more knowledge of its surroundings and the ability to respond to that knowledge, you won't be far from wrong.  A robot isn't that different from an alarm clock or a washing machine.