Answer Set Programs

Unground and Automatically Generated

A number of programs in lparse/smodels format, along with encode / decode scripts, etc.

Su-Doku

A Su-Doku solver; it is very simple and surprisingly quick. It took about 20 minutes to write and about 10 minutes to debug, so I use it as an example of the power of answer set programming as a problem-solving technique. Download (tar ball).
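To give a flavour of why the solver is so short, here is a minimal sketch of the kind of encoding involved, in gringo-style syntax. The predicate names (including given/3 for the clues) are illustrative assumptions and this is not the exact encoding in the tarball.

  % a minimal sketch, not the encoding from the tarball;
  % clues are assumed to arrive as given(Row,Col,Value) facts
  num(1..9).

  % each cell holds exactly one value
  1 { cell(R,C,V) : num(V) } 1 :- num(R), num(C).

  % respect the clues
  :- given(R,C,V), not cell(R,C,V).

  % no value repeats in a row, a column or a 3x3 block
  :- cell(R,C1,V), cell(R,C2,V), C1 < C2.
  :- cell(R1,C,V), cell(R2,C,V), R1 < R2.
  blk(R,C,((R-1)/3)*3 + (C-1)/3) :- num(R), num(C).
  :- cell(R1,C1,V), cell(R2,C2,V), blk(R1,C1,B), blk(R2,C2,B),
     R1*9+C1 < R2*9+C2.

A handful of choice rules and constraints of this shape is essentially all the problem needs; the solver does the rest.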

Hitori

A puzzle similar to Su-Doku, with an additional condition that requires sections of the puzzle to be connected (i.e. it requires reachability). Two encodings are given: one encodes reachability and grounds to about 250,000 rules for the given example; the second is only just over 1,000 rules and is tight, but will give answer sets that are not solutions. Now includes a puzzle generator. Download (tar ball).
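The core of a reachability-style encoding can be sketched roughly as follows, in gringo-style syntax; white/2 (unshaded cells) and adj/4 (orthogonal neighbours) are assumed inputs and the names are illustrative, not the exact rules from the tarball.

  % pick a single root cell among the white cells
  1 { root(R,C) : white(R,C) } 1.

  % reachability is the transitive closure of adjacency over white cells
  reach(R,C) :- root(R,C).
  reach(R2,C2) :- reach(R1,C1), adj(R1,C1,R2,C2), white(R2,C2).

  % every white cell must be reachable, i.e. the white cells are connected
  :- white(R,C), not reach(R,C).

The transitive-closure rule is what makes the first encoding non-tight and expensive to ground; the second encoding drops it, which is why it is tight but can admit answer sets that are not solutions.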

Hashiwokakero

Another Japanese logic puzzle, sometimes known as "bridges". Now includes a puzzle generator. Download (tar ball).

Knight's Tour

A program that gives solutions to a classic chess problem: visiting every square of a chess board once and only once using a knight. Download (tar ball).
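A sketch of one common way to encode the tour, in gringo-style syntax; the names are illustrative assumptions and may differ from the program in the tarball.

  % board squares and legal knight moves
  square(1..8,1..8).
  delta(1,2). delta(2,1). delta(-1,2). delta(-2,1).
  delta(1,-2). delta(2,-1). delta(-1,-2). delta(-2,-1).
  move(X,Y,X+DX,Y+DY) :- square(X,Y), delta(DX,DY), square(X+DX,Y+DY).

  % each square has exactly one outgoing and one incoming step
  1 { step(X,Y,X2,Y2) : move(X,Y,X2,Y2) } 1 :- square(X,Y).
  1 { step(X2,Y2,X,Y) : move(X2,Y2,X,Y) } 1 :- square(X,Y).

  % the chosen steps must form a single cycle covering the whole board
  % (this sketch encodes the closed-tour variant)
  visited(1,1).
  visited(X2,Y2) :- visited(X,Y), step(X,Y,X2,Y2).
  :- square(X,Y), not visited(X,Y).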

Solitaire

Solitaire (also known as peg solitaire or brainvita) is a simple yet surprisingly difficult and addictive game. The program is for an English board - a cross shape formed from a central 3 by 3 block with 'arms' that are 3 pegs wide and 2 pegs long. To start, every hole but the center one is filled; with each move a peg jumps over an adjacent peg (which is removed), until only one peg is left, in the center of the board. Download (tar ball).
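One natural way to encode this is as a planning problem with one jump per time step. A rough gringo-style sketch is given below; loc/1, line(A,B,C) (meaning B lies between A and C along a row or column), time/1 and init/1 are assumed inputs and the names are illustrative, not those used in the tarball.

  % initial state
  full(L,0) :- init(L).

  % at every step, the peg at A jumps over B into the empty hole C
  1 { jump(A,B,C,T) : line(A,B,C) } 1 :- time(T), T > 0.
  :- jump(A,B,C,T), not full(A,T-1).
  :- jump(A,B,C,T), not full(B,T-1).
  :- jump(A,B,C,T), full(C,T-1).

  % effects of the jump, and inertia for untouched holes
  full(C,T)  :- jump(A,B,C,T).
  moved(A,T) :- jump(A,B,C,T).
  moved(B,T) :- jump(A,B,C,T).
  full(L,T)  :- full(L,T-1), time(T), T > 0, not moved(L,T).
  % (the goal, that only the center hole is filled at the final step,
  %  is omitted here)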

Factoring

Derived from the TOAST code base, this is a series of programs that factor increasingly large pseudo-prime numbers (each the product of two primes). Notably, the number of literals is constant while the search space steps up gradually. Download (tar ball).

Ground Programs

Program test sets and benchmarks used in various papers.

RC5

A pair of sets of programs that model the RC5 symmetric key cipher, as described in "Applied Cryptography". Note that these are not generic benchmark collections: one set has been designed to have constant program size but varying difficulty, and the other constant difficulty but varying program size. Each program has exactly one answer set, but you should probably set the solver to compute all answer sets, as this removes the element of luck involved. GrinGo or the ground2smodels program may be needed to convert these files to smodels format. Varying size and varying difficulty. A harder set of varying difficulty benchmarks is also available.

Smodels Benchmark Collection

A mixed collection of programs used to assess the speed of various smodels variants. Many of these programs were generated by / taken from the sets listed above. Download (tar ball).

2009 Solver Competition

This tarball contains a set of scripts that should handle input and output to an answer set solver for the 2009 solver competition.

To set up for your own solver, do the following:

  1. Rename your top level directory to match your team name.
  2. Replace topLevel/bin/solver with your solver.
  3. If the solver output is radically different from smodels / cmodels / clasp then you may need to add a pattern to topLevel/lib/parseAnswerSet.pm. (If so, please let me know, as I try to keep it up to date and I'd love to support more solvers.)
  4. If you want to replace any of the encodings, simply replace the appropriate file in topLevel/encodings/.
  5. To submit solutions, enter the top level directory and run bin/submitAll.sh.

Important things to note:

  1. Two problems, GrammarBasedInformationExtraction and StrategicCompanies, are not included. The first has (as far as I know) no natural encoding in AnsProlog (currently there is not one in asparagus); the second is of a complexity higher than NP and, although it has an encoding in asparagus, not all solvers support the necessary constructs.
  2. This is still very much a work in progress.