Last Modified 10 March 2005. Project last worked on in 1994. Fire comment originally from January 2003.

The Reactive Accompanist

This is my Master's degree work, conducted at the University of Edinburgh Department of AI. I built a behavior-based, reactive system that provides chord accompaniment to unfamiliar folk melodies. The work is based on Prof. Brooks' subsumption architecture (SA). As far as I know, it was the first system to use SA outside of mobile robotics, though I know of a few larger, more recent projects applying SA to other fields.

I mean to set up a separate page for this along with sound files, etc., but for now [May 1994] here's the dissertation, The Subsumption Strategy Development of a Music Modelling System, in compressed PostScript, with figures. Additional figures were photocopied in. If you find the blank pages annoying, email me with your address and I'll probably be so flattered you wrote that I'll send you copies.

My MSc supervisors, Alan Smaill and Geraint Wiggins, and I also wrote a technical report summarizing the thesis and discussing non-robotic applications of subsumption architecture: The Reactive Accompanist: Applying Subsumption Architecture to Software Design. You can also find it under AI and Music at Edinburgh. That copy is temporarily(?) inaccessible due to the Edinburgh fire. Since 'temporary' has lasted for over two years, here's a draft version I still had the LaTeX for.

Many people asked me how I did the behavior decomposition on this project. This question helped lead to my PhD topic. I also wrote a chapter on the topic for Luc Steels' The Biology and Technology of Intelligent Autonomous Agents (Springer, 1995): The Reactive Accompanist: Adaptation and Behavior Decomposition in a Music System. This is the first place I published the suggestion that adaptive requirements can be used to determine how to decompose intelligence into modularized behaviors. This idea is now a key component of Behavior Oriented Design.

The Reactive Accompanist also became one of the main demos for attracting students to the AI program at the University of Edinburgh. This is presumably because, on hearing the sound files, students are sure they could do better! My apologies to anyone who was subjected to that demo for too long. The original demo I wrote allowed multiple versions of multiple songs to be played, or the music to be turned off, but the PC version made after I left cut a lot of features and drove a lot of people crazy. We are still looking for a new MSc student to make the system run in true real time, now that the machines (and FFT programs) are up to it.

For the real masochists, here's the original source code (compressed tar file), in GNU C++ circa 1992.

Related Work

I have other pages about my work and related reactive and behavior-based research. Here is a short list of related music and AI work.
  • Christopher Raphael's Music Plus One
  • Belinda Thom's Interactive Improvisational Companion
  • The Interactive Music Network is trying to bring music into interactive multimedia.  I'm not sure if that includes AI music but I think so.

  • page author: Joanna Bryson