Last Modified 3 January 2019; previously last modified 5 May, 2001

Intelligence by Design

"Intelligence by Design" is the title of my PhD dissertation, which is about the systems engineering of real-time, human-like artificial intelligence (AI). I defended it on 30 April 2001 at the MIT AI Lab; my committee was Lynn Andrea Stein (supervisor), Bruce Blumberg, Olin Shivers, and Gill Pratt (none of us stayed at MIT). You can get the formal PhD version of Intelligence by Design from MIT, but for decades the AI Lab also produced more polished hard-copy technical reports, which it distributed globally. These were produced after the actual dissertation, because by then you no longer had to pay tuition. Mine came out during a brief, odd period when these reports were printed the size of a paperback. Anyway, here is the Technical Report version of Intelligence by Design.

My group still develops software to support the Behaviour Oriented Design of AI, see the AmonI Software Page.


Here are the main slides. They are in PDF; I showed them using acroread in slide mode.

I also had some overhead slides for showing the reactive plans alongside the behavior libraries in the examples. The first overhead slide demonstrates a non-real-time drive, and the second shows that it really ran (for a more convincing account, see my SAB 2000 paper). Next I showed a video of a robot (see below). The next two slides are the reactive plans for the robot (the second shows only the changes from the first); they accompany the slides following "examples" after Drives in the main slides (see above). The final slides line up with the behavior libraries for the Transitive Inference behaviors in the main slides.

The video. This is 38M, compressed (gzipped)! The full video runs 3 minutes. It looks a little weird because it went PAL -> NTSC -> DV -> QuickTime (it's in QuickTime format now). This is old work, but I included it in the talk because it vividly illustrates that:

  • Real-time Behavior Oriented Design (BOD) systems run smoothly and continuously even though POSH action selection operates sequentially, because the behaviors can operate in parallel.
  • Behaviors should be decomposed by adaptive requirements. For example, the infra-red sensors need no memory, the sonar needs about half a second of memory, and the bumpers need memory for a minute or so (until the robot has moved around whatever it just hit).
  • This system first made me think about Behavior-Oriented Design: while reimplementing the sensor-fusion routines after rebuilding my action-selection architecture in C++ (it had been in Perl 5.003), I realized I had previously been thinking only about action selection and had taken behavior decomposition more or less for granted.

    The system behind the first clip in the video is discussed in my MPhil dissertation; the next two clips scale up from that system to add short-term episodic memory and long-term learning. The first clip shows the fully autonomous robot with a persistent goal (I have to make it think it's really stuck before it changes the direction it's trying to go in). The middle and end of that clip run at 2x real time. The second clip restricts this behavior so that the robot only does a "leg" - it goes until it can't find a way forward. This refinement makes it easier for the robot to learn a map by instruction. The middle of the second clip is sped up 3x. The final clip shows the robot learning a very simple little map. The robot asks where to go when it is learning, but says "pick " when it remembers its own way. It says "hi" when it enters a "decision space" (where it could go in multiple directions, unlike a corridor) and "bye" when it leaves one. Notice that towards the beginning of the third clip the robot says "neighbor! which way?" This indicates that it knows a way to go but has rejected that direction, because it tried it recently and failed to advance (I don't know why it stopped; maybe a sonar glitch). This shows the robot using its episodic memory to override its standard navigation routine. The squeaky voice towards the end of the fast part of this clip (4x!) is me telling the camera man to move around and show that I'm not teleoperating the robot (yes, that's me waving my hands at the end!)
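    The memory decomposition in the second bullet above can be sketched in code. This is a hypothetical illustration only, not the actual POSH or BOD implementation: each behavior keeps its own sensory memory with a different time horizon, and a simple sequential selector (a crude stand-in for POSH action selection) polls the behaviors in priority order. All names and the trigger logic are invented for the sketch.

```python
from collections import deque

class Behavior:
    """A behavior owning its own sensory memory (illustrative sketch).

    horizon_s is how long readings are retained, implementing the idea
    that different behaviors have different adaptive memory requirements.
    """
    def __init__(self, name, horizon_s):
        self.name = name
        self.horizon_s = horizon_s
        self.memory = deque()  # (timestamp, reading) pairs

    def sense(self, reading, now):
        if self.horizon_s == 0.0:
            # No memory: keep only the most recent reading.
            self.memory = deque([(now, reading)])
        else:
            self.memory.append((now, reading))
            # Prune readings older than this behavior's horizon.
            while self.memory and now - self.memory[0][0] > self.horizon_s:
                self.memory.popleft()

# Decomposition by adaptive requirement, as described in the text:
infra_red = Behavior("infra-red", horizon_s=0.0)   # no memory
sonar     = Behavior("sonar",     horizon_s=0.5)   # ~half a second
bumpers   = Behavior("bumpers",   horizon_s=60.0)  # ~a minute

def select_action(behaviors):
    """Sequential selection: return the first behavior, in priority
    order, whose memory demands action (simplistic trigger: any
    remembered reading).  The real POSH mechanism is far richer."""
    for b in behaviors:
        if b.memory:
            return b.name
    return "default-drive"
```

    In the running system the behaviors would execute in parallel (each updating its own memory continuously) while the selector runs sequentially over them, which is why the sequential action selection does not make the robot's motion jerky.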

    page author: Joanna Bryson