Last updated 26 April 2015, updates to talks October 2015

Ethics: Robots, AI, and Society

Photo: JB on Channel 4 News, January 2015 (photo via Roger K. Moore).

Everyone should think about the ethics of the work they do, and the work they choose not to do.  Artificial Intelligence and robots often seem like fun science fiction, but in fact already affect our daily lives.  For example, services like Google and Amazon help us find what we want by using AI.  They learn both from us and about us when we use them.  The USA and some other countries and organizations now employ robots in warfare. 

Since 1996 I have been writing about AI and Society, including maintaining this web page.  I was worried because some researchers got into the news by claiming that AI or intelligent robots would one day take over the world.  This page was originally about why that wasn't going to happen.

But by 2008 the USA had more robots in Iraq than allied troops (about 9,000).  Also, several prominent scientists began publicly working on the problem of making robots ethical.  The problem here is not the robots taking over the world, but that some people want to pretend that robots are responsible for themselves.  In fact, robots belong to us.  People, governments and companies build, own and program robots.  Whoever owns and operates a robot is responsible for what it does.

The purpose of this page is to explain why people worry about the wrong things when they worry about AI.

I hope that by writing this page, I can help us worry about the right things.

Why Build AI?

If robots might take over the world, or machines might learn to predict our every move or purchase, or governments might try to blame robots for their own unethical policy decisions, then why would anyone work on advancing AI?  My personal reason for building AI is simple:  I want to help people think.

Our society faces many hard problems, like protecting the environment, avoiding and ending wars, and dealing with the consequences of overpopulation while protecting human rights.  These problems are so hard, they might actually be impossible to solve.  But building and using AI is one way we might figure out some answers.  If we have tools to help us think, they might make us smarter.  And if we have tools that help us understand how we think, that might help us find ways to be happier.

Of course, all knowledge and tools, including AI, can be used for good or for bad.  This is why it's important to think about what AI is, and how we want it to be used.  This page is designed to help people (including me) think about the ethics of AI research.

To start out with the basics: here's a Definition of Artificial Intelligence I coauthored with Jeremy Wyatt for the Children's Britannica.  And here is an interview where an American high school student asks me about studying AI.

Why We Shouldn't Fear Robots – They Aren't People (or Even Apes)

As I said, I think most people are worrying about the wrong things when they worry about Robots and AI.  First, here are some reasons not to worry.

1)  AI has the same ethical problems as other, conventional artifacts.

In the mid-1990s I attended a number of talks that made me realize that some people really expected AI to replace humans.  Some people were excited about this, and some were afraid.  Some of these people were well-known scientists.  Nevertheless, it seemed to me that they were all making a very basic mistake.  They were afraid that whatever was smartest would "win", somehow.  But we already have calculators that can do math better than us, and they don't even take over the pockets they live in, let alone the world.

My friend Phil Kime agreed with me, and added that he thought the problem was that people didn't have enough direct, personal experience of AI to really understand whether or not it was human.  So we wrote one of my first published papers, Just Another Artifact: Ethics and the Empirical Experience of AI.  We first wrote it in 1996; it eventually got partially published in 1998 in a cybernetics workshop.  Recently we decided it was worth rewriting and publishing the paper properly, so a radically updated version, Just an Artifact: Why Machines are Perceived as Moral Agents, appeared in the proceedings of The Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI '11).

We argue that realistic experience of AI would help us better judge what it means to be human, and help us get over our over-identification with AI systems.  We point out that there are ethical issues with AI, but they are all the same issues we have with other artifacts we build and value or rely on, such as fine art or sewage plants.

2)  It's wrong to exploit people's ignorance and make them think AI is human.

It is not enough for experts to understand the role of AI in society.  We also have a professional obligation to communicate that understanding to non-experts.  The people who will use and buy AI should know what its risks really are.  Unfortunately, it's easier to get famous and sell robots if you go around pretending that your robot really needs to be loved, or otherwise really is human – or superhuman!  In 2000 I wrote an article about this called A Proposal for the Humanoid Agent-builders League (HAL).  This was presented at The Symposium on Artificial Intelligence, Ethics and (Quasi-)Human Rights at AISB 2000, which was a great meeting.  In the paper I propose creating a league of programmers dedicated to opposing the misuse of AI technology to exploit people's natural emotional empathy.  The slogans would be things like "AI: Art not Soul" or "Robots Won't Rule".

In 2000 I didn't know that the US military would soon try to give robots ethical obligations, so the whole paper is written in a fairly humorous style.  As AI has gotten better, the issues have gotten more serious.  Fortunately, academics and other experts are getting serious about them too.  In 2010 I was invited to help the EPSRC, the British funding agency responsible for robotics research, work on this topic, and was heavily involved in writing their Principles of Robotics.  So a lot of the ideas in my HAL paper are now at least informal UK policy.  The five principles are:

  1. Robots should not be designed as weapons, except for national security reasons.
  2. Robots should be designed and operated to comply with existing law, including privacy.
  3. Robots are products: as with other products, they should be designed to be safe and secure.
  4. Robots are manufactured artefacts: the illusion of emotions and intent should not be used to exploit vulnerable users.
  5. It should be possible to find out who is responsible for any robot.

For the full legal versions of the principles and their explanations, see the EPSRC web page.  For an account of how they were written, see The Making of the EPSRC Principles of Robotics, from the AISB Quarterly, Issue 133, Spring 2012, pp. 14-15.

By the way, my students and I are among the many researchers who work on building artificial consciousness and synthetic emotions.  These aren't any more magic or deserving of ethical obligation than artificial hands or legs.  In humans, consciousness and emotions are associated with our morality, but that is because of our evolutionary and cultural history.  Outside of humans, moral obligation is not tied by logical necessity to awareness or feelings.

3)  Robots will never really be your friends.

In October 2007, I was invited to participate in a workshop called Artificial Companions in Society: Perspectives on the Present and Future at the Oxford Internet Institute.  I took the chance to write my third ethics article, Robots Should Be Slaves.  This is now a book chapter in Close Engagements with Artificial Companions:  Key social, psychological, ethical and design issues, a 2010 book edited by Yorick Wilks.  The idea is not that we should abuse robots (and of course it isn't that human slavery was OK!).  The idea is that robots, being authored by us, will always be owned—completely.  Fortunately, even though we may need robots to have and understand things like emotions, it is still both possible and ethically obligatory to ensure they do not suffer from neglect, a lack of self-actualization, or their low social status in the way a person would.  Robots are things we build, and so we can pick their goals and behaviours.  Both buyers and builders ought to pick those goals sensibly.

By coincidence, at the same time I was writing the final version of that book chapter, I got asked to comment on an article by Anne Foerst called Robots and Theology. Anne and I had worked on the Cog project at the MIT AI Laboratory together in 1995.  In fact, we'd argued about this before, but I think the arguments are better phrased in that issue.  Anne has the interesting perspective that robots are capable of being persons and knowing sin, and as such are a part of the spiritual world.  I argue in Building Persons is a Choice that while it is interesting to use robots to reason about what it means to be human, calling them "human" dehumanises real people.  Worse, it gives people the excuse to blame robots for their actions, when really anything a robot does is entirely our own responsibility.

I don't tend to say "Robots should be slaves" any more though, because even though to me it is incomprehensible that people should ever be owned, I do know that slaves have always been human.  And I don't want people to think that robots should or even could be human.  The point of the chapter was only to communicate the ethical implications of that owned status.

Why We Should Worry About AI Anyway

Being worried about the wrong things doesn't mean that there's nothing to worry about.  Artificial Intelligence is not as special as many people think, but it is further accelerating a rapidly-building phenomenon that's been going on for about 10,000 years: human culture.  Human culture is changing almost every aspect of life on earth, particularly human society.  

4)  Human culture is already a superintelligent machine turning the planet into paper clips (and cows).

One of the reasons I object to AI scaremongering is that even where the fears are realistic, such as Nick Bostrom and his colleagues' description of an overwhelming, self-modifying superintelligence, making AI into the bogeyman displaces that fear 30-60 years into the future.  In fact, AI is here now, and even without AI, our hyperconnected socio-technical culture already creates radically new dynamics and challenges for human society.  Bostrom writes about (among other things) how a future machine intelligence autonomously pursuing a worthwhile goal might incidentally convert the planet into paper clips.  We might better think of our current culture itself as the superintelligent but non-cognizant machine – a machine that has learned to support more biomass on the planet than ever before (by mining fossil fuels), but is converting all that life (at least the large animals) into just a few species (humans, dogs, cats, sheep, cows), without anyone specifically intending to wipe out the rest of the large animals or the planet's other biodiversity.  Similarly, no one specifically decided that children weren't being sufficiently monitored by their parents before the 1990s, yet childhood and parenthood have been entirely transformed in just a few decades.

In 2014 I wrote a very academic book chapter about this called Artificial Intelligence & Pro-Social Behaviour.  As I said, these changes aren't entirely due to AI.  They're also the result of better communication brought about by mobile phones and social media, the simple fact that there are more people, and processes of cultural and legislative change.  But AI, particularly machine learning, plays a large and growing role.

5)  Big data + better models = ever-improving prediction, even about individuals.

AI and computer science, particularly machine learning but also HCI, are increasingly able to facilitate the computational social sciences.  Fields that are benefiting include political science, economics, psychology, anthropology and business / marketing.  As I said at the top of the page, understanding human behaviour may be the greatest benefit of artificial intelligence, if it helps us find ways to reduce conflict and live sustainably.  However, knowing full well what an individual person is likely to do in a particular situation is obviously a very, very great power.  Negative applications include deliberately addicting customers to a product or service, skewing vote outcomes through disenfranchisement of some classes of voters, and even just stalking.  It's pretty easy to guess when someone will be somewhere these days.
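
To make that concrete, here is a minimal sketch of the kind of prediction being described.  Everything in it is hypothetical and invented for illustration (the data, the places, the function names); it is not taken from any of the systems or papers mentioned on this page:

    # A toy "where will this person be?" predictor.  A model of how the
    # population in general spends its day means we need only a couple of
    # observations about one individual to make a plausible guess about them.
    from collections import Counter, defaultdict

    # Hypothetical check-in logs: (person, hour_of_day, place).
    population_logs = [
        ("a", 9, "office"), ("a", 13, "cafe"), ("a", 20, "gym"),
        ("b", 9, "office"), ("b", 13, "cafe"), ("b", 20, "gym"),
        ("c", 9, "office"), ("c", 13, "cafe"), ("c", 20, "home"),
    ]

    # Population-level model: how people in general spend each hour.
    prior = defaultdict(Counter)
    for _person, hour, place in population_logs:
        prior[hour][place] += 1

    def predict_place(person_logs, hour):
        """Guess where someone will be at a given hour, falling back on the
        population model when we have no data about that individual."""
        personal = Counter(p for h, p in person_logs if h == hour)
        if personal:
            return personal.most_common(1)[0][0]
        return prior[hour].most_common(1)[0][0]

    # Two observations about a new person already support plausible guesses.
    new_person = [(9, "office"), (13, "cafe")]
    print(predict_place(new_person, 13))  # "cafe" (from their own data)
    print(predict_place(new_person, 20))  # "gym" (a population-level guess)

Real systems use far richer models and far more data, but the shape of the argument is the same: the better the population-level model, the less information about any one person is needed to predict them.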

As we in the computational social sciences learn more and more, our models of human behaviour get better and better.  As our models improve, we need less and less data about any particular individual to predict what they are going to do.  So just practising good data hygiene is not enough, even if that were a skill we could teach everyone.  My professional opinion is that there is no going back on this, but that isn't to say society is doomed.  Think of it this way.  We all know that the police, the military, even most of our neighbours could get into our house if they wanted to.  But we don't expect them to do that.  And, generally speaking, if anyone does get into our house, we are able to prosecute them and to claim damages back from insurance.  I think our personal data should be like our houses.  First of all, we shouldn't ever be seen as selling that data, just leasing it for particular purposes.  This is the model software companies already use for their products; we should just apply the same legal reasoning to us humans.  Then if we have any reason to suspect our data has been used in a way we didn't approve, we should be able to prosecute.  That is, the applications of our data should be subject to regulations that protect ordinary citizens from the intrusions of governments, corporations and even friends.
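
Purely as an illustration of the leasing idea (everything below is hypothetical, not an existing law, standard or API), a "data lease" could be as simple as a record of who may use the data, for exactly what purpose, and until when, so that any other use is identifiable and actionable:

    # A hypothetical "data lease": personal data is released to one named
    # licensee, for one named purpose, for a limited time; never sold outright.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DataLease:
        data_subject: str   # the person the data is about
        licensee: str       # who may use the data
        purpose: str        # the only use the lease permits
        expires: date       # after this date the data may not be used at all

        def permits(self, user, purpose, on):
            """True only if this particular use falls inside the lease."""
            return (user == self.licensee
                    and purpose == self.purpose
                    and on <= self.expires)

    lease = DataLease("alice", "MapCo", "route planning", date(2016, 1, 1))
    print(lease.permits("MapCo", "route planning", date(2015, 6, 1)))        # True
    print(lease.permits("MapCo", "targeted advertising", date(2015, 6, 1)))  # False
    print(lease.permits("AdBroker", "route planning", date(2015, 6, 1)))     # False

The point is not the code but the contract: a use that does not match the lease is a breach we can detect and prosecute, just like a break-in.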

More from me:

Photo: JB holding a hokey zapping ball thing in about 1999; T-shirt says "question ..."

For formal citations of the papers by me mentioned on these pages (and much more) see my publications page.  I also write blog posts about AI and about ethics.

Here is a list of my AI / Robot ethics publications:

Other work besides formal publications on the topic:

Mostly by coincidence, I've started doing scientific work on the origins of (human) ethical behaviour.

On a note that's less related than a lot of people think, I also write about consciousness, both machine & not.  This work came about partly because so many people associate consciousness and ethics, but do they know why?

Similarly, here are some of my papers on emotions, which I also don't think necessarily determine ethical obligation, though many other people differ:

Reminder:  HTML and bibtex for formal citations for the above papers are all available on my publications page.

More from other people:

page author: J J Bryson
photo August 2001, by Laura Kusumoto (shirt by CPSR)