Last revised June 2018 (just publications & media).
For my latest views, see also my blogposts on AI and AI Ethics.
Artificial Intelligence, Robots, and Society
Everyone should think about the ethics of the work they do, and
the work they choose not to do. Artificial Intelligence and
robots often seem like fun science fiction, but in fact already
affect our daily lives. For example, services like
Google and Amazon help us find what we want by using AI.
They learn both from us and about us when we use them. The
USA and some other countries and organizations now employ robots in warfare.
Since 1996 I have been writing about AI and Society, including
maintaining this web page. I was worried because some
researchers got into the news by claiming that AI or intelligent
robots would one day take over the world. This page was
originally about why that wasn't going to happen.
But by 2008 the USA had more robots in Iraq than allied troops
(about 9,000). Also, several prominent scientists began
publicly working on the problem of making robots ethical.
The problem here is not the robots taking over the world, but that
some people want to pretend that robots are responsible for
themselves. In fact, robots belong to us. People,
governments and companies build, own and program robots.
Whoever owns and operates a robot is responsible for what it does.
The purpose of this page is to explain why people worry about the wrong things when they worry about AI and robots. I hope that by writing this page, I can help us worry about the right things.
Why Build AI?
If robots might take over the world, or machines might learn to
predict our every move or purchase, or governments might try to
blame robots for their own unethical policy decisions,
then why would anyone work on advancing AI? My personal
reason for building AI is simple: I want to help people think.
Our society faces many hard problems, like finding ways to work
together yet maintain our diversity, avoiding and ending wars, and
learning to live truly sustainably (where our children consume no
more space and time than our parents, and no more other resources
than can be replaced in a lifetime) while still protecting human
rights. These problems are so hard, they might actually be
impossible to solve. But building and using AI is one way we
might figure out some answers. If we have tools to help us
think, they might make us smarter. And if we have tools that
help us understand how we think, that might help us find
ways to be happier.
Of course, all knowledge and tools, including AI, can be used for
good or for bad. This is why it's important to think about
what AI is, and how we want it to be used. This page is designed to help people (including me) think about the ethics of AI.
Why We Shouldn't Fear Robots –
They Aren't People (or Even Apes)
As I said, I think most people are worrying about the wrong things
when they worry about Robots and AI. First, here are some
reasons not to worry.
1) AI has the same
ethical problems as other, conventional artifacts.
In the mid-1990s I attended a number of talks that made me
realize that some people really expected AI to replace
humans. Some people were excited about this, and some were
afraid. Some of these people were well-known
scientists. Nevertheless, it seemed to me that they were all
making a very basic mistake. They were afraid that whatever
was smartest would "win", somehow. But we already have
calculators that can do math better than us, and they don't even
take over the pockets they live in, let alone the world.
My friend Phil
Kime agreed with me, and added that he thought the problem
was that people didn't have enough direct, personal experience of
AI to really understand whether or not it was human. So we
wrote one of my first published papers, Just Another Artifact: Ethics and the
Empirical Experience of AI. We first wrote it in 1996;
it eventually got partially published in 1998 in a cybernetics
workshop. Recently we decided it was worth rewriting and
publishing the paper properly, so a radically updated version Just
an Artifact: Why Machines are Perceived as Moral Agents,
appeared in the proceedings of The Twenty-Second International
Joint Conference on Artificial Intelligence (IJCAI '11).
We argued that realistic experience of AI would help us better
judge what it means to be human, and help us get over our
over-identification with AI systems. We pointed out that
there are ethical issues with AI, but they are all the
same issues we have with other artifacts we build and value or
rely on, such as fine art or sewage plants.
2) It's wrong to exploit
people's ignorance and make them think AI is human.
It is not enough for experts to understand the role of AI in
society. We also have a professional obligation to
communicate that understanding to non-experts. The people
who will use and buy AI should know what its risks really
are. Unfortunately, it's easier to get famous and sell
robots if you go around pretending that your robot really needs to
be loved, or otherwise really is human – or super human! In
2000 I wrote an article about this called A Proposal for the Humanoid
Agent-builders League (HAL). This was presented at The
Symposium on Artificial Intelligence, Ethics and (Quasi-)Human
Rights at AISB
2000, which was a great meeting. In the paper I
propose creating a league of programmers dedicated to opposing the
misuse of AI technology to exploit people's natural emotional
empathy. The slogans would be things like "AI: Art not Soul".
In 2000 I didn't know that the US military would soon try to
give robots ethical obligations, so the whole paper is written in
a fairly humorous style. As AI has gotten better, the issues
have gotten more serious. Fortunately, academics and other experts are getting serious about them too. In 2010 I was invited to help the British robotics funding agency, the EPSRC, work on this topic, and was heavily involved in writing their Principles of Robotics. So a lot of the ideas in my HAL paper are now at least informal UK policy. The five principles are:
Robots should not be designed as weapons, except for national security reasons.
Robots should be designed and operated to comply with existing
law, including privacy.
Robots are products: as with other products, they should be
designed to be safe and secure.
Robots are manufactured artefacts: the illusion of emotions
and intent should not be used to exploit vulnerable users.
It should be possible to find out who is responsible for any robot.
3) Artificial consciousness and synthetic emotions don't create moral obligation.
My students and I are among the many researchers who work on
building artificial consciousness and
synthetic emotions. These aren't any more
magic or deserving of ethical obligation than artificial hands or
legs. In humans, consciousness and emotions are associated with our morality, but that is because of our evolutionary and cultural history. In
artefacts, moral obligation is not tied by either logical or
mechanical necessity to awareness or feelings. This is one of
the reasons we shouldn't make AI responsible: we can't punish it in
a meaningful way, because good AI systems are designed to be
modular, so the "pain" of punishment could always be excised, unlike in humans or other animals.
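To make the modularity argument concrete, here is a toy sketch in Python (all names are hypothetical, not from any real system) of why "punishing" a modular agent is meaningless: the aversive-feedback module is just one component among others, and whoever controls the agent can simply remove it.

    # Toy sketch: "punishment" in a modular agent is just another module,
    # and whoever controls the agent can excise it at will.

    class AversiveFeedback:
        """Penalises the agent whenever it takes a sanctioned action."""
        def score(self, action):
            return -10 if action == "sanctioned" else 0

    class Agent:
        def __init__(self, modules):
            self.modules = list(modules)  # the agent just is its modules

        def evaluate(self, action):
            # The agent's judgement of an action is the sum of whatever
            # modules it currently happens to contain.
            return sum(m.score(action) for m in self.modules)

    agent = Agent([AversiveFeedback()])
    print(agent.evaluate("sanctioned"))  # -10: the punishment "hurts"

    agent.modules.clear()                # the owner excises the module...
    print(agent.evaluate("sanctioned"))  # 0: ...and the "pain" is gone

Nothing analogous is possible with a human: our aversion to punishment is not a removable component.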
By coincidence, at the same time I was writing the final version
of that book chapter, I got asked to comment on an article by Anne
Foerst called Robots and Theology. Anne and I had worked
on the Cog project at the MIT AI
Laboratory together in 1995. In fact, we'd argued about this
before, but I think the arguments are better phrased in that
issue. Anne has the interesting perspective that robots are
capable of being persons and knowing sin, and as such are a part
of the spiritual world. I argue in Building Persons is a Choice
that while it is interesting to use robots to reason about what it
means to be human, calling them "human" dehumanises real
people. Worse, it gives people the excuse to blame robots
for their actions, when really anything a robot does is entirely
our own responsibility.
I don't tend to say "Robots should be slaves" any more though,
because even though to me it is incomprehensible that people
should ever be owned, I do know that slaves have always been
human. And I don't want people to think that robots should
or even could be human. The point of the chapter was only to communicate the ethical implications of that owned status. Because we build and own robots, we shouldn't ever want them to be people.
Why We Should Worry About AI
Being worried about the wrong things doesn't mean that there's
nothing to worry about. Artificial Intelligence is not as
special as many people think, but it is further accelerating an already rapidly accelerating phenomenon that's been
going on for about 10,000 years: human culture. Human
culture is changing almost every aspect of life on earth,
particularly human society.
4) Human culture is
already a superintelligent machine turning the planet into apes,
cows, and paper clips.
One of the reasons I object to AI scaremongering is that even
where the fears are realistic, such as Nick Bostrom and colleagues' description of overwhelming, self-modifying superintelligence, making AI into the bogeyman displaces that fear 30-60 years into the future. In fact, superintelligence is here now, and even without AI, our hyperconnected
socio-technical culture already creates radically new dynamics and
challenges for both human society and our environment.
Bostrom writes (among other things) about how a future machine intelligence autonomously pursuing a worthwhile goal might incidentally convert the planet
into paper clips. We might better think of our current
culture itself as the superintelligent but non-cognizant machine –
a machine that has learned to support more biomass on the planet
than ever before (by mining fossil fuels) but is changing all that
life (at least the large animals) into just a few species (humans,
dogs, cats, sheep, goats, and cows). No one ever
specifically intended to wipe out the rest of the large animals
and other biodiversity on the planet, but we're doing it.
Similarly, no one specifically decided that children weren't sufficiently monitored by their parents up until the 1990s, but childhood and parenthood have been entirely transformed in just a few decades. These are just two consequences of our expanding cognition, and AI is very much a part of that.
In 2014 I wrote a very academic book chapter about this called Artificial
Intelligence & Pro-Social Behaviour. As I
said, these changes aren't entirely due to AI. They're also
the result of better communication brought about by mobile phones
and social media, the simple fact that there are more people, and
processes of cultural and legislative change. But AI,
particularly machine learning, plays a large and growing role.
5) Big data + better
models = ever-improving prediction, even about individuals.
AI and computer science, particularly machine learning but also
HCI, are increasingly able to help out research in the social
sciences. Fields that are benefiting include political
science, economics, psychology, anthropology and business /
marketing. As I said at the top of the page,
understanding human behaviour may be the greatest benefit of
artificial intelligence if it helps us find ways to reduce
conflict and live sustainably. However, knowing full well what an individual person is likely to do in a particular situation is obviously a very, very great power. Bad applications of this power include the deliberate addiction of customers to a product or service, skewing vote outcomes by effectively disenfranchising some classes of voters (convincing them their votes don't matter), and even just old-fashioned stalking. It's pretty easy to guess when someone will be somewhere these days.
As we in the computational social sciences learn more and more,
our models of human behaviour get better and better. As our
models improve, we need less and less data about any particular
individual to predict what they are going to do. So just
practising good data hygiene is not enough, even if that were a
skill we could teach everyone. My professional opinion is
that there is no going back on this, but that isn't to say society
is doomed. Think of it this way. We all know that the
police, the military, even most of our neighbours could get into
our house if they wanted to. But we don't expect them to do
that. And, generally speaking, if anyone does get into our
house, we are able to prosecute them legally, and to claim any
damages back from insurance. I think our personal data
should be like our houses. First of all, we shouldn't ever
be seen as selling our own data, just leasing it for a particular
purpose. This is the model software companies already use
for their products; we should just apply the same legal reasoning
to us humans. Then if we have any reason to suspect our data
has been used in a way we didn't approve, we should be able to
prosecute. That is, the applications of our data should be
subject to regulations that protect ordinary citizens from the
intrusions of governments, corporations and even friends.
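To illustrate how models come to substitute for individual data, here is a toy sketch (entirely synthetic data; the setup is hypothetical, not from any real study, and uses only numpy) comparing two ways to predict a brand-new person: three of their observations alone, versus those same three observations plus a model fitted to a large population.

    # Toy sketch: once a behavioural model is fitted to a big population,
    # a few observations of a new individual suffice to predict them well.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_obs, n_feat = 500, 50, 5

    # Shared behavioural structure plus a small personal offset each.
    w_pop = rng.normal(size=n_feat)
    offsets = rng.normal(scale=0.3, size=n_people)

    def observe(person, k):
        X = rng.normal(size=(k, n_feat))       # situations encountered
        y = X @ w_pop + offsets[person] + rng.normal(scale=0.1, size=k)
        return X, y

    # "Big data": pool everyone's observations, fit a population model.
    Xs, ys = zip(*(observe(p, n_obs) for p in range(n_people)))
    w_hat = np.linalg.lstsq(np.vstack(Xs), np.concatenate(ys), rcond=None)[0]

    # A brand-new individual, of whom we see only three data points.
    new_offset = rng.normal(scale=0.3)
    X_few = rng.normal(size=(3, n_feat))
    y_few = X_few @ w_pop + new_offset + rng.normal(scale=0.1, size=3)

    # Population model plus a tiny personal correction (mean residual).
    bias_hat = np.mean(y_few - X_few @ w_hat)

    # Compare both approaches on the individual's future behaviour.
    X_test = rng.normal(size=(1000, n_feat))
    y_test = X_test @ w_pop + new_offset
    pooled_err = np.mean((X_test @ w_hat + bias_hat - y_test) ** 2)

    w_solo = np.linalg.lstsq(X_few, y_few, rcond=None)[0]  # 3 points alone
    solo_err = np.mean((X_test @ w_solo - y_test) ** 2)

    print(f"population model + 3 points: MSE {pooled_err:.3f}")
    print(f"3 points alone:              MSE {solo_err:.3f}")  # much worse

The population model does almost all the work; the individual's own data is needed only for a small final correction, which is why personal data hygiene alone cannot protect anyone.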
Building Persons is a Choice, an invited commentary on an article by Anne Foerst called Robots and Theology, in Erwägen Wissen Ethik 20(2):195-197 (2009). It isn't that AI couldn't conceivably deserve ethical obligation; rather, it would be unethical for us to allow it to.
January 2015: I debated James Barratt again for
the Channel 4 News feature Will
super-intelligent machines kill us all?. Unfortunately,
that page has a lot of Bostrom / Musk on it, but also our video.
We were supposed to be talking about middle-class income, but
Barratt made it about battlefield robots. Of course, I know a
lot about those too...
April 2011: The
EPSRC released their Principles
of Robotics. I'm one of the authors and contributed
a lot to the text there. The principles were written at a
special meeting the EPSRC held on robot ethics in September 2010.
January 2011: I've accepted an invitation to join Lifeboat,
though I don't know much about them. Apparently I am helping
safeguard humanity from robots and AI. If you send me
email with any comments pro and con about Lifeboat, I'd appreciate it.
August 2008: I'm one of the experts interviewed for
the Heart Robot project.
Thanks for linking to my 1998 paper (Just Another
Artifact: Ethics and the Empirical Experience of AI), but I
think your argument is a gross oversimplification of my and
Phil Kime's point. Of course autonomous robot weapons
can kill you, and are killing people now. But it isn't
because some AI has turned evil. AI is no more to blame
than other artifacts of our culture, like our foreign
policy. Rather than worrying about AI specifically,
people should be worrying about government, culture and
decision making in general. The threats (and promises)
of AI are real, but not as unique as people think. I
believe the "singularity" & "ethical robots" (e.g. Arkin)
debates are a distraction from the real problem of designing
and choosing appropriate governing techniques and assigning
appropriate responsibility and blame for societal-level
decisions that affect us all.
July 2000: A snippet of private email & some
off-the-record comments on robots taking over the world were reported
by The Register. I didn't correct the record until the
same text mysteriously turned up in The Guardian four
years later (and therefore on the first page of Google searches
for me). Blay & I apparently got the first ever
"correction"/apology from Bad Science, but they still
got my title & institution wrong.
On a less related note than a lot of people think, I also write
about consciousness, both machine &
not. This work came about partly because so many people
associate consciousness and ethics, but do they know why?
for the tricky bit... (originally "Consciousness Is Easy,
but Learning Is Hard"), invited article for The
Philosopher's Magazine 28(4):70-72. Explains that everything with RAM has functional self-awareness and video cameras have perfect memory; what makes us intelligent (and is computationally difficult) is generalising from experience, which involves forgetting and unconsciousness. (To be
honest, I think our obsession with consciousness comes from our
lack of conscious access to so much of our own minds, but I
haven't written about that. Yet.)
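To show what "functional self-awareness" means in that deflationary sense, here is a trivial sketch (names purely illustrative): any system that stores its own state in memory can access and report that state, and that is all the claim requires.

    # Trivial sketch of functional self-awareness: a system with memory
    # can inspect and report its own state. Nothing here is conscious.

    class Gadget:
        def __init__(self):
            self.state = {"battery": 0.87, "mode": "idle"}

        def self_report(self):
            # The system reads its own internals: self-awareness in the
            # minimal, RAM-level sense, and nothing more.
            return f"I am a {type(self).__name__}; my state is {self.state}"

    print(Gadget().self_report())

The hard part, as the article argues, is not this kind of self-access but generalising from experience.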
Similarly, here are some of my papers on emotions,
which I also don't believe determine ethical obligation, but are
clearly involved in humans' ethical intuitions:
Creating Friendly AI, from the singularity folks. One day in 1995, my friends from the MIT AI Lab and
I went over to the Media Lab to see a talk about the "coming
singularity" (when AI becomes smarter than people) by Vernor
Vinge. That talk was one of the reasons I wanted to write
"Just Another Artifact". We left the talk before it was
over because it generally seemed silly and was getting
repetitive and we needed to get back to work. But on the
way back, while I was listening to some of my fairly brilliant
friends (e.g. Charles Isbell and Carl de Marken) belittle the
chances of their AI ever being able to take over a toaster, it
did remind me of the scientists at Los Alamos betting facetiously on the yield of the first atomic bomb.
Here's what Vinge
was thinking in 2008. A bit more positive than a
decade earlier, but otherwise similar. I do think it's
good to have people who really think about the long term.
Some of the arguments in the AI Companions piece were
inspired by White Dot.