CM10135 / Programming II:   Lecture 8


Applications of Search: Artificial Intelligence


I. Housekeeping

  1. Finish "throwing throwables", show them the program.
  2. Correction to 'simple' hash hack (in slides now) -- wrap at end (++ top, -- bottom).
  3. Correction to series for insertion & selection sort: (also now in slides)
    1. Amounts to: 
      for n do
          for 1, 2, 3, 4...n-1 do  (average value (n-1)/2)
    2. This is equivalent to the arithmetic series (1+2+3+4+...+(n-1)) == n*(n-1)/2
    3. n*(n-1)/2 = 1/2*n^2 - 1/2*n, which means it's O(n^2) -- the n^2 term dominates
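The series argument can be sanity-checked with a short sketch (class and method names invented for illustration): count the comparisons selection sort's loops actually perform and check the total against the closed form n*(n-1)/2 (the sum runs 1..n-1, not 1..n).

```java
// Count the comparisons selection sort makes on n items: the inner loop
// runs (n-1) + (n-2) + ... + 1 times, the arithmetic series above.
public class SelectionSortCount {
    static long countComparisons(int n) {
        long comparisons = 0;
        for (int i = 0; i < n - 1; i++)          // one pass per position
            for (int j = i + 1; j < n; j++)      // scan the rest for the minimum
                comparisons++;
        return comparisons;
    }

    public static void main(String[] args) {
        // For n = 10: 9 + 8 + ... + 1 = 45 = 10*9/2.
        System.out.println(countComparisons(10));   // 45
    }
}
```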

II. What is the Complexity of Intelligence?

  1. We'll start out by thinking about a simple game.
    1. Games aren't what intelligence was developed for.
    2. But they are a representation of the kinds of reasoning & competitions we do in social domains.
    3. Often used traditionally in AI, somewhat controversial now (more about this at the end.)
    4. Good intuition pumps.
  2. Tic-tac-toe (noughts & crosses)
    1. Initial board is empty.
    2. First player can choose one of 9 places to go.
    3. Next player can choose one of 8 places to go
    4. Game can't take more than 9 steps.
    5. Object is to get three characters in a row --- many games end in a draw, with no winner.
    6. Web pages with pictures: http://www.ics.uci.edu/~eppstein/180a/970417.html, http://www.gamasutra.com/features/20000626/brockington_pfv.htm
  3. Complexity grows roughly exponentially: 9 choices for the first move, then 8, then 7... the full game tree has at most 9! = 362,880 move sequences.
  4. Think about chess -- that's really exponential...
  5. Most AI problems are in NP
    1. NP = "nondeterministic polynomial" --- finding the (best/optimal) solution is essentially computationally intractable.
    2. But if you do find a solution, you can verify it in polynomial time (this makes NP different from EXP).
    3. A worse class of problems is one where you can't even recognize a solution when you happen to find one!
  6. Empirically, many problems we work on seem similar
    1. How long would it have taken you to invent quick sort?
    2. How long did it take you to understand that it would work?
    3. The answer to question 1 might actually be shorter than you think, sometimes even shorter than the answer to question 2; see the section on expertise below.
  7. If a problem is polynomial or easier, can usually write a conventional algorithm to solve the problem.
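The tic-tac-toe branching in item 2 can be multiplied out directly. A minimal sketch (class name invented): this gives an upper bound on the number of move sequences, since many games end before the board fills.

```java
// Upper bound on tic-tac-toe move sequences: 9 choices for the first move,
// then 8, then 7, ... down to 1 -- i.e. 9 factorial.
public class TicTacToeBound {
    static long sequences() {
        long total = 1;
        for (int open = 9; open >= 1; open--)
            total *= open;   // one fewer free square each move
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sequences());  // 362880
    }
}
```

Chess has a branching factor of roughly 30-40 per move and no nine-move cap, which is why the same multiplication becomes hopeless there.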

III. Depth First vs. Breadth First Search

  1. Depth first search looks all the way down a tree's branches until it finds a solution.
    1. start at the root
    2. look for your answer
    3. if not found
      1. for each child
        1. depth first search
  2. Breadth first search looks across the tree at one depth, and then on to the next level
    1. breadth-first-search (list)  // initially list only contains the root
    2. for each thing in the list
      1. look for the answer
      2. if not found
        1. append item's children to the end of list
  3. Depth first search is cheaper, because it doesn't store as much state.  But you can't know whether you have the best answer (the one with fewest steps) unless you look at the whole tree!
  4. Breadth first search guarantees the shortest answer, but is impossible in cases like chess --- requires more memory than particles in the universe.
  5. But depth first has another problem --- may enter an infinite loop!
    1. e.g. in chess -- can move a rook back & forth forever.
  6. This is when you need AI --- when no algorithm can solve the problem.
  7. Someone who missed lecture asked for a diagram for this.  Rather than draw one myself, here's one from Ralph Morelli's AI Lecture notes:

    Consider the following graph:

    [diagram: the example graph, nodes A through U]

    Depth First Search examines the nodes in the following order:

      A, B, E, K, S, L, T, F, M, C, G, N, H, O, P, U, D, I, Q, R, S, T, U

    Breadth First Search examines the nodes in the following order:

      A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U
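The two pseudocode listings above can be written out in Java. A minimal sketch on an invented five-node tree (not Morelli's graph); both methods return the order in which nodes are examined, so the difference between the strategies is visible directly.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Depth-first vs. breadth-first search over a small tree.
public class TreeSearch {
    static class Node {
        final String name;
        final List<Node> children;
        Node(String name, Node... children) {
            this.name = name;
            this.children = List.of(children);
        }
    }

    // Depth first: examine the node, then recurse into each child in turn.
    static void dfs(Node node, List<String> visited) {
        visited.add(node.name);
        for (Node child : node.children)
            dfs(child, visited);
    }

    // Breadth first: keep a queue, appending children to the end of the list.
    static List<String> bfs(Node root) {
        List<String> visited = new ArrayList<>();
        Queue<Node> queue = new ArrayDeque<>();
        queue.add(root);                  // initially the list holds only the root
        while (!queue.isEmpty()) {
            Node node = queue.poll();
            visited.add(node.name);       // look for the answer here
            queue.addAll(node.children);  // if not found, queue the children
        }
        return visited;
    }

    public static void main(String[] args) {
        //        A
        //       / \
        //      B   C
        //     / \
        //    D   E
        Node tree = new Node("A",
                new Node("B", new Node("D"), new Node("E")),
                new Node("C"));
        List<String> depthOrder = new ArrayList<>();
        dfs(tree, depthOrder);
        System.out.println(depthOrder);   // [A, B, D, E, C]
        System.out.println(bfs(tree));    // [A, B, C, D, E]
    }
}
```

Note that the recursive dfs here terminates because a tree has no cycles; on a graph like chess positions, it needs a visited set (or a depth cutoff) to avoid the infinite-loop problem in item 5.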

IV. Heuristic Search

  1. Want ways to prune the tree -- make it have fewer branches.
  2. If you are playing an opponent, assume that each player will avoid losing games, ignore those branches.
    1. This is called "mini-max search" because you alternately minimize & maximize your own expected outcome as you go through the tree.
  3. Estimate the value of branches & go down the ones that look most profitable
    1. Estimates are based on heuristics, rules that are often but not always right.
      1. E.g. you might guess that the shortest way across town on the map is the fastest, but really the fastest way depends on traffic, width of streets, tourists, shops etc.
      2. In chess, a common heuristic is to count how many pieces you have left, giving extra value to more powerful pieces.
    2. A* search is kind of like breadth first search, but you keep sorting the list, & keep going down the shortest / cheapest path you've found so far.
      1. A*-search (list)  // initially list only contains the root
      2. for the cheapest thing in the list
        1. check if you have the solution, if so use it
        2. take item off list, remembering the cost associated with it,
        3. estimate cost of all children, summing their cost with parent's cost.
        4. sort children into list
      3. If you can guarantee that your heuristic never overestimates the cost of performing the action (underestimating is OK) then this will find the optimal solution.
      4. A* search is used in almost all computer games for navigation: choosing how to get to a goal location.
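The mini-max search in item 2 above can be sketched on a tiny hand-built game tree (class names, tree shape, and leaf scores all invented for illustration). The maximizing player picks the child with the highest value; the minimizing opponent picks the lowest, and the two alternate down the tree.

```java
import java.util.List;

// Minimal mini-max over a hand-built two-ply game tree.
public class MiniMax {
    static class Node {
        final int value;              // only meaningful at leaves
        final List<Node> children;
        Node(int value) { this.value = value; this.children = List.of(); }
        Node(Node... children) { this.value = 0; this.children = List.of(children); }
    }

    static int minimax(Node node, boolean maximizing) {
        if (node.children.isEmpty())
            return node.value;                            // leaf: use its score
        int best = maximizing ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node child : node.children) {
            int v = minimax(child, !maximizing);          // alternate min & max
            best = maximizing ? Math.max(best, v) : Math.min(best, v);
        }
        return best;
    }

    public static void main(String[] args) {
        // Max to move at the root; Min picks the worst leaf on each branch.
        Node game = new Node(new Node(new Node(3), new Node(5)),   // Min picks 3
                             new Node(new Node(2), new Node(9)));  // Min picks 2
        System.out.println(minimax(game, true));  // 3
    }
}
```

The pruning comes from what this lets you ignore: branches where the opponent would hand you a win never influence the value, so real implementations (alpha-beta) skip evaluating them at all.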
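The A* pseudocode in item 3 can likewise be sketched, with a priority queue standing in for the repeatedly sorted list. The graph, edge costs, and heuristic values below are all invented for illustration; the heuristic is deliberately optimistic about the remaining cost.

```java
import java.util.*;

// A* on a tiny weighted graph: always expand the path whose
// (cost so far + heuristic estimate) is smallest.
public class AStar {
    record Edge(String to, int cost) {}

    static List<String> search(Map<String, List<Edge>> graph,
                               Map<String, Integer> heuristic,
                               String start, String goal) {
        record Path(List<String> nodes, int cost) {}      // a path and its actual cost
        PriorityQueue<Path> frontier = new PriorityQueue<>(
            Comparator.comparingInt((Path p) ->
                p.cost() + heuristic.get(p.nodes().get(p.nodes().size() - 1))));
        frontier.add(new Path(List.of(start), 0));
        while (!frontier.isEmpty()) {
            Path path = frontier.poll();                   // cheapest thing in the list
            String last = path.nodes().get(path.nodes().size() - 1);
            if (last.equals(goal))
                return path.nodes();                       // solution found: use it
            for (Edge e : graph.getOrDefault(last, List.of())) {
                List<String> extended = new ArrayList<>(path.nodes());
                extended.add(e.to());                      // child = path plus one edge,
                frontier.add(new Path(extended, path.cost() + e.cost()));  // sorted back in
            }
        }
        return null;                                       // no route to the goal
    }

    public static void main(String[] args) {
        Map<String, List<Edge>> graph = Map.of(
            "A", List.of(new Edge("B", 1), new Edge("C", 4)),
            "B", List.of(new Edge("C", 1), new Edge("D", 5)),
            "C", List.of(new Edge("D", 1)));
        // Optimistic guesses of the remaining cost to D.
        Map<String, Integer> heuristic = Map.of("A", 3, "B", 2, "C", 1, "D", 0);
        System.out.println(search(graph, heuristic, "A", "D"));  // [A, B, C, D]
    }
}
```

The direct route A-C-D costs 5, but A* finds the cheaper A-B-C-D at cost 3, because the queue keeps pulling it back to the front.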

V. Parallel Search

  1. One way to solve the problem of combinatorics is to have the search run in parallel.
  2. Every time you have multiple options, have one thread take each option.
  3. Think of people exploring tunnels in Scooby Doo.
  4. If you could get enough threads, & you had some way to let all of them know when one of them found a solution, then you could in effect solve NP problems in polynomial time.
  5. Not something you can do on one PC -- threads share the same processor, no big win.
  6. Even massively parallel computers usually only have 1,000 processors, don't get you that far in chess.
  7. Solutions people are currently working on:
    1. Quantum computing :
      1. particles can be in many places at one time 
      2. weird!  but has really been done for a small number of bits.
    2. Biological computing:
      1. use genetics to create AND and OR gates in bacteria.
      2. Takes maybe 48 hours for one operation, BUT can be done billions of times in a single test tube.
      3. Again, people are really working on this (for example, Ron Weiss).
    3. Evolution / memetics:
      1. If a few members of a species or a culture find a good / winning solution, the entire species or culture may eventually adopt that solution.
      2. This works much faster in memetics (ideas) than genetics (which requires waiting for children to grow up.)
  8. Trying to come up with good design rules is like memetics.
    1. This is why sometimes programmers can chance on the right answer to a question even sooner than they can understand how to solve the question.
    2. Languages like Java are designed to give you a heavily pruned search space.
    3. Using good technique makes it likely you will find good answers!
  9. Using very limited search of known good strategies is the key to several other kinds of AI:
    1. Expert Systems
    2. Production Rule Systems (see ACT-R, Soar)
    3. Reactive AI (that's my web page on the subject!)

VI. Summary

  1. A lot of what we do (with our brains as well as our computers) can be thought of as search.
  2. Games are a "toy domain", but can help us understand why search is hard, and how we might be solving other problems.
  3. Real brains probably solve many things in parallel, but they are still combating combinatorics.

page author: Joanna Bryson
2 March 2004; drawing of breadth- & depth-first search added 13 March