# Sample Exam Questions

### Question 1

State and discuss the seven desirable properties of mechanism design

### answer

**Convergence/guaranteed success**
Executing the mechanism should necessarily result in its eventual termination; without this property negotiations may continue indefinitely.
**Maximising social welfare**
The mechanism should lead to the best possible total outcome for all parties (i.e. the sum of the utilities of all agents should be maximal).
**Pareto efficiency**
A mechanism is Pareto efficient if it tends to lead to outcomes where no agent benefits at the expense of other agents. An outcome is Pareto optimal if no alternative outcome would increase the utility of one participant without reducing the utility of another.
**Individual rationality**
A mechanism supports individual rationality if it is always in the best interests of participants to comply with the protocol; that is, a participant is never forced to act in such a way as to ultimately reduce its own utility. This property should apply to the protocol as a whole, and may not apply in intermediate steps (i.e. an agent may be forced to reduce its utility temporarily in an intermediate step).
**Stability**
A mechanism is stable if all participants are compelled (by their expected utility) to behave in a prescribed way; this property allows agents to model the outcomes of protocols in a predictable way. An example of stability is Nash equilibrium, in which no participant can improve its expected utility by unilaterally deviating from its strategy.
**Simplicity**
This property may be applied to the design process of protocols (i.e. designers should choose simpler protocols over more complicated ones with the same behaviour) for obvious reasons. It may also refer to the computational complexity of decision-making within the mechanism, such that it is tractable for a participant to decide on a strategy.
**Distribution**
Mechanisms should, where possible, avoid depending on centralised decision-making or single points of failure (i.e. a centralised arbitrator), and should take into account the capabilities and limitations of distributed communication, such as bandwidth (i.e. reducing the number of messages exchanged between participants) and communication failures.
**Robustness**
In the event of a failure of a component (i.e. a network failure) or participant (i.e. a software crash), the remaining participants should be able to recover, or resolve the protocol, in such a way that the utility of the remaining participants is not adversely affected.

### Question 2

Describe the parameters defining English, Dutch, first-price sealed bid and Vickrey auctions and where appropriate give the dominant strategy.

### answer

English
• first-price (highest bidder wins, and pays highest price),
• open outcry (participants declare bids to everybody)
• ascending (bid price increases)
• Auction terminates when no bids are received in a given time (i.e. there is no guarantee of duration or termination).
• Dominant strategy (when an agent's valuation of the goods is certain and other agents' valuations are unknown) is to raise the price in small increments until the agent wins or the price reaches its valuation, at which point the agent withdraws.
Dutch
• first-price,
• open outcry,
• descending (bid price decreases in fixed increments until a lower reserve is met or the price reaches zero)
• In some cases bid collisions (where two bids are received simultaneously) lead to a resolution process in which the price is reset and decreased in smaller intervals among the colliding bidders.
• Without collision resolution a maximal duration can be determined; with collision resolution the duration may not be decidable.
• There is no dominant strategy
First-price sealed bid
• first price
• one shot. (only one round)
• Dominant strategy is to bid less than true valuation.
Vickrey
• second price (Highest bidder wins, but pays the price of the second highest bid)
• one shot. (single round)
• Dominant strategy is to bid true valuation.
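
A minimal sketch of the Vickrey rule (bidder names and values invented): the highest bidder wins but pays the second-highest bid, which is what makes truthful bidding dominant.

```python
def vickrey_outcome(bids):
    """bids: mapping bidder -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]  # winner pays the second-highest bid

# vickrey_outcome({"alice": 120, "bob": 100, "carol": 90}) -> ("alice", 100)
```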

### Question 3

Outline the monotonic concession protocol and the Zeuthen strategy

### Question 4

Discuss the problems that arise from the use of a valuation function that reduces a multi-attribute offer to a single value. Illustrate your answer with an example valuation function and appropriately chosen offers.

### answer

It is hard to differentiate between offers based on multiple criteria. For example, if the valuation function is f(x,y) = 0.2x+0.8y and there are two offers (1,1) = 1 and (2,0.75) = 1, they cannot be differentiated.
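
A two-line sketch (using the weights from the example above) makes the problem concrete: the scalarised values coincide, so the agent cannot express a preference between genuinely different offers.

```python
def value(offer, weights=(0.2, 0.8)):
    """Collapse a two-attribute offer (x, y) to a single scalar."""
    x, y = offer
    return weights[0] * x + weights[1] * y

# value((1, 1)) and value((2, 0.75)) both evaluate to 1.0,
# so the two offers are indistinguishable under this valuation.
```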

### Question 5

State and justify the four properties of institutions that provide a basis for trust and security in agent interaction.

### answer

**decrease uncertainty**: because the purpose of the institution is to enable a particular kind of interaction, the probability is increased that only agents with coincident goals will enter the institution
**reduce conflict of meaning**: the domain of discourse is largely pre-determined by the shared institutional goals, so that alternative meanings are largely precluded
**create expectations of outcome**: agents interact within an institutional framework because of a desire to achieve a certain goal or set of goals, hence the objective is shared
**simplify the decision process**: the possibility space is pruned through the restricted domain and through the shared goal(s), so there are fewer options to consider

### Question 6

Name and describe the four classes of norm as they occur in agent-based systems.

### answer

**ABSTRACT NORMS**: vague desires, possibly contradictory, probably unenforceable
**CONCRETE NORMS**: abstract norms grounded in ontology, open to reasoning
**RULES**: concrete specifications of (un)acceptable actions, and obligations, declarative style
**PROCEDURES**: concrete specifications of sequences of actions, options, outcomes

### Question 7

Consider the institutional framework of the lecture:
1. Identify some roles, scenes, and conversations (precise formal specifications are *not* required), but you may use diagrams to illustrate your answers.
2. Propose an abstract norm, a concrete norm and a rule that might govern behaviour in a lecture.

### answer

1. Although this situation was used as illustration in a lecture, this part of the question is primarily aimed at getting students to apply knowledge in a new (ish) situation.

Roles: teacher, learner, collaborator

Scenes: presentation, discussion, question, group-working

Conversations: a simple protocol situated in one of the above scenes and illustrating a possible interaction

2. abstract: treat others with respect, as you would wish to be treated yourself; learning is useful

concrete: listen while others are speaking; don't put other people down; engage in the topic being taught; ask relevant questions

rule: if you want to ask a question, raise your hand; if your mobile rings then switch it off

### Question 8

Speech acts typically have two components: name them and then list five different kinds of speech acts along with illustrative examples.

### answer

A speech act consists of a performative which indicates the intention of the speech act and a proposition.

kinds are:

**representative**: inform the recipient
**directive**: get the recipient to do something
**commissive**: commit the speaker to do something
**expressive**: the speaker expresses a mental state
**declarative**: the speaker announces something

### Question 9

Negotiation strategy can be characterized by the initial offer an agent makes and the way in which an agent changes its reaction to offers over time. Describe in detail the key aspects of the *monotonic concession* and the *Zeuthen* strategies.

### Question 10

Contract net is a very structured form of negotiation used to achieve task sharing. Outline the five phases of the contract net protocol and what problems may arise in practice.

### answer

1. Problem recognition: an agent (the initiator) identifies a task that it is unable to fulfill (and a set of agents who may be able to fulfill the task).
2. Task Announcement: the initiator broadcasts a "Call For Proposals" message to potential contractors. This message contains a description of the problem.
3. Task Analysis/Bidding: participants analyse the task and determine whether they are capable of and willing to perform it; if so, they send a "Bid" describing their capabilities relevant to the task and the terms under which they are willing to fulfill the request.
4. Contracting: the initiator selects a bid from the set received based on its own preferences, and awards the contract to the winning bidder (simultaneously notifying the other bidders of their failure to win the contract). If no bids are received the protocol ends.
5. Task Completion: once the winning bidder receives its winning notification it performs the task and notifies the initiator when it is complete.

Example problems include: task specification, quality of service, choosing between competing offers, and differentiating between offers based on multiple criteria.
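
The five phases can be sketched as plain function calls (the class, method and task names here are invented, and bids are simple numeric costs):

```python
class Participant:
    """A potential contractor; bids its (hypothetical) cost for a task."""
    def __init__(self, name, cost):
        self.name, self.cost = name, cost

    def evaluate(self, task):
        return self.cost                    # phase 3: bid (None = decline)

    def perform(self, task):
        return f"{self.name} completed {task}"   # phase 5

def contract_net(task, participants):
    # phases 1-2: problem recognition and "Call For Proposals" broadcast
    bids = {p: p.evaluate(task) for p in participants}
    bids = {p: b for p, b in bids.items() if b is not None}
    if not bids:
        return None                         # no bids received: protocol ends
    # phase 4: contracting - award to the cheapest bidder
    winner = min(bids, key=bids.get)
    return winner.perform(task)
```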

### Question 11

A grass-cutting robot can perceive the presence of grass or null (meaning just something else) and has the following repertoire of actions:

• forward
• cut
• turn

The goal of this robot is to wander around the garden, which comprises grassed areas and paths, cutting grass wherever it finds it. To specify the agent behaviour, you may use the following three domain predicates:

• In(x,y) robot is at (x,y)
• Grass(x,y) there is grass at (x,y)
• Facing(d) robot is facing in direction d

Now answer the following questions:

1. Draw a diagram of the garden in which the robot is going to operate and then write down and explain the rules you would use
1. to control cutting
2. to control movement

You may assume the garden is rectangular with the farthest corner at (maxx, maxy).

2. Discuss how you would extend the rule set, percepts, actions and/or predicates to prevent the robot from moving on to flower borders.
3. Do you feel this is an appropriate architecture for the task, or would a subsumption architecture be more suitable? Consider how Steels's Mars explorer model might be adapted for this environment. Statement of rules is not required.

### answer

1.
```
In(x,y) & Grass(x,y) -> Do(cut)
```
and
```
In(0,0) & Facing(north) & not(Grass(0,0)) -> Do(forward)
In(0,0) & Facing(south) & not(Grass(0,0)) -> Do(turn)
In(0,0) & Facing(east) & not(Grass(0,0)) -> Do(forward)
In(0,0) & Facing(west) & not(Grass(0,0)) -> Do(turn)
```
and similarly but differently for other corners, and
```
In(0,x) & Facing(west) & not(Grass(0,x)) -> Do(turn)
In(0,x) & Facing(d) & not(Grass(0,x)) -> Do(forward)
```
and similarly but differently for the other boundaries

2. Looking for general evidence of thinking. Hope no one proposes hard-coding the border edges in the same way as the limits of the garden. The simplest solution is probably to do arithmetic in the Grass predicate: In(x,y) & Facing(north) & not(Grass(x,y+1)) -> Do(turn), and similarly but differently for the other facing directions. Other solutions might involve introducing a new percept earth and perhaps a new predicate Look. A good student might also remark upon the problem of ordering the rules to get the right results, and how this gets increasingly difficult as the size of the ruleset increases.
3. More evidence of thinking required. Could posit the existence of a radio transmitter in the centre of the garden (it doesn't have to be exactly in the centre, or really even anywhere near it), so that the robot can use the gradient effect for navigation (and for depositing the grass clippings, so best put the transmitter in the compost heap). The radioactive crumbs should now be used not to reinforce a particular path, but instead to make a robot go in a different direction.
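
A compact sketch of the part 1 rules in code, collapsing the corner and boundary cases with coordinate arithmetic rather than enumerating them (the facing encoding and the grass representation are assumptions):

```python
def act(x, y, facing, grass, maxx, maxy):
    """grass: set of (x, y) cells that currently contain grass."""
    if (x, y) in grass:
        return "cut"                      # In(x,y) & Grass(x,y) -> Do(cut)
    # cell the robot would move into
    dx, dy = {"north": (0, 1), "south": (0, -1),
              "east": (1, 0), "west": (-1, 0)}[facing]
    nx, ny = x + dx, y + dy
    if 0 <= nx <= maxx and 0 <= ny <= maxy:
        return "forward"
    return "turn"                         # facing a boundary: turn
```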

### Question 12

Name and characterise the commonly agreed properties of an agent.

### answer

Key properties (agents may assume a subset of these):

**Autonomous**
This is the key property differentiating agents from, say, objects: agents should have the capacity to decide for themselves whether or not to take a given action. Agents may be proactive (taking action without explicit external stimulus) and/or reactive (only taking action based on external events); in both cases there is a causal decoupling between elements or events in the environment and the agent's actions (i.e. the agent is in some way responsible for its own actions).
**Situated in an environment**
Agents are assumed to have some external environment with which they interact. This may be the physical world, a software simulation, a set of other agents or some other system. The key is that there are things outside the agent which it interacts with but does not have immediate control over.
**Communicative (also social ability)**
In the field of software agents a key property is typically the ability of an agent to communicate and exchange instructions and information with other agents.

Other valid properties which may apply to some but not all agents:

**Mobility**
The ability of an agent to move between (sub-)environments on different machines at different physical locations.
**Veracity**
It is common to assume that agents do not give out information which they know to be false.
**Rationality**
Agents' decisions may be supported by sound reasoning.
**Learning/adaptation**
Agents may adapt and change to deal with changes in their environment.

### Question 13

Contrast the costs and benefits of algorithmic, declarative and utility-driven approaches to programming an agent.

### answer

**Algorithmic**
• + simplest (for the designer)
• + easy to understand
• + possible to determine exact behaviour
• - inflexible (hard to deal with interruptions/exceptions/changes)
• - cannot easily be integrated with learning
**Declarative**
• + individual parts of the program are easy to understand
• + statements may have a higher level of generality than imperative programs: higher signal/noise ratio
• + rules may be combined/re-used with little or no modification
• + may be formally grounded
• + new rules may be integrated on the fly (~learning)
• - reasoning may be costly/intractable
• - need to manage rules/knowledge to avoid bloat
• - complete behaviours may be hard to understand
**Utility-based**
• + agent planning is simple: just choose the available action with the highest expected utility
• + allows for flexible behaviours; plans can be immediately abandoned if expected utility changes or a higher-utility action is found
• + suitable for learning algorithms, using utility as a means for gauging/improving performance
• - difficult to estimate/model utility in many situations
• - problems with local optima may lead to thrashing between different activities
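
Utility-driven action selection reduces to an argmax over the available actions; a sketch (the action names and utility estimates are invented):

```python
def select(actions, expected_utility):
    """Pick the available action with the highest expected utility."""
    return max(actions, key=expected_utility)

# e.g. select(["wait", "move", "cut"],
#             {"wait": 0.1, "move": 0.5, "cut": 0.9}.get)  ->  "cut"
```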

### Question 14

Outline the subsumption architecture and explain its strengths and weaknesses

### answer

• The subsumption architecture is built from a set of behaviours.
• The behaviours operate in parallel; each takes some sensory input and may generate some output (by 'firing') which indicates some action.
• The behaviours are composed into layers using an "inhibition relation" which provides a strict total ordering over them, such that a lower-ordered behaviour will override a higher-ordered behaviour.
• Action selection occurs at a given time by taking all behaviours which have fired and checking whether a lower-ordered behaviour has also fired, thus overriding the output of the original behaviour.
• The subsumption architecture is representation-free (that is, no models of the world are created).
• Action selection in the subsumption architecture has a low complexity (O(n²), where n is the maximum of the number of percepts and the number of behaviours), so actions can typically be selected very quickly.
• Despite the simplicity of the components, complex behaviours can emerge from the interactions between layers.
• Since no world model is created, agents must have sufficient stimulus in order to take actions; sensory starvation can cause the agent to stop doing anything. The incorporation of internal stimuli (such as timed transitions inside the behaviours) mitigates this to a certain extent.
• Because agents have little or no state, it is difficult to see how they might take into account non-local stimuli (such as historical events).
• Adapting behaviour and learning are difficult to incorporate.
• Complex behaviour is an emergent property of the interacting individual behaviours, and as such is difficult to engineer.
• It is difficult to build agents with many layers, as the interactions between layers become too complex.
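
The action-selection mechanism described above can be sketched as a scan through behaviours kept in inhibition order, returning the first one that fires (the behaviours shown are invented examples):

```python
def select_action(behaviours, percepts):
    """behaviours: (condition, action) pairs, highest priority first
    (i.e. lowest order number). Returns the first action that fires."""
    for condition, action in behaviours:
        if condition(percepts):
            return action
    return None  # sensory starvation: nothing fired

behaviours = [
    (lambda p: "obstacle" in p, "turn"),   # overrides everything below
    (lambda p: "grass" in p, "cut"),
    (lambda p: True, "wander"),            # default, always fires
]
```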

### Question 15

Describe the key characteristics of Steels's Martian explorer subsumption architecture and explain how it worked.

### answer

• There are a number of explorer agents and a mother ship.
• The task of the explorer agents is to explore a distant planet and to collect samples of a particular type of precious rock.
• No map of the planet is available, but the terrain is known to be such that radio communication between explorers is unlikely to be possible.
• The mother ship generates a radio signal which does not pass any information on to the explorers, but does allow them to locate the mother ship using the gradient of the signal strength (i.e. the stronger the signal, the nearer the mother ship). Agents may then move towards, or away from, the mother ship using the signal.
• Agents communicate indirectly using radioactive crumbs which may be dropped, picked up and detected by passing robots.
• The agent is governed by the following 6 behaviours
1. if detect obstacle then change direction
2. if carrying samples and at the mother ship, then drop samples
3. if carrying samples and not at the base then drop two crumbs and travel up signal gradient (toward ship)
4. if detect a sample then pick a sample up
5. if sense crumbs then pick up 1 crumb and travel down gradient (Away from ship)
6. if true then move randomly
These are composed as follows (1 being the lowest order (highest priority))
1 < 2 < 3 < 4 < 5 < 6
• In the absence of any stimulus the agent explores randomly (rule 6). When an agent finds a sample it will pick it up, return to the mother ship and drop it (rules 4, 3 & 2). The crumbs serve to reinforce paths to places where samples have been found (rule 5): agents without samples at the mother ship will tend to return to places where samples have been found by following the paths of crumbs; however, if they find no samples, these paths will tend to deteriorate as the crumbs are picked up but not replaced.
• Benefits of the architecture are that it is robust (agents are not dependent on each other for action), simple (the rules can be calculated efficiently in real time), and effective (in many cases the architecture leads to near-optimal performance).

### Question 16

Explain how utility functions were used in the TileWorld.

### answer

TileWorld consists of a two-dimensional grid; each cell in the grid may contain nothing, a tile, a hole or an obstacle. An agent is placed on an empty cell of the grid, where it may move up, down, left or right. It may not move into cells containing holes or obstacles. An agent may move into a cell containing a tile if the cell beyond the tile in the direction of movement does not contain an obstacle; if the cell beyond the tile contains a hole then the hole is filled by the tile and becomes empty. Depending on the parameters of the experiment, tiles, holes and obstacles may appear with given probabilities at a given time, and may disappear at a later time. The experiment is run by placing the agent at a point determined by the experimenter (or at random) and running the environment for a given number of steps.

The utility of an agent's run r (i.e. over the duration of the experiment) is defined as:

u(r) = (number of holes filled in r) / (number of holes that appeared in r)

The utility over a run is used to determine an agents performance in that run, where a utility of 0 indicates that the agent failed to fill any holes, and 1 indicates that the agent filled all holes.

An optimal agent in TileWorld is defined as one which maximises its expected utility over all possible runs. That is, in any given environment the agent always makes decisions which are most likely to lead to the maximum utility. An example might be abandoning the decision to move toward a distant tile in favour of a nearer one which is less likely to disappear by the time the agent gets there, or moving to distant regions with more tiles and holes instead of trying to fill local holes with fewer tiles.
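
The run utility is just the ratio defined above; as a sketch:

```python
def utility(holes_filled, holes_appeared):
    """u(r): 0.0 means no holes were filled, 1.0 means all were filled."""
    return holes_filled / holes_appeared
```

A division-by-zero guard would be needed for runs in which no holes appear at all; how such runs are scored is not specified here.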

### Question 17

Explain the purpose of agent communication languages and agent content languages, explaining why both are necessary and giving examples of each.

### answer

Agent communication languages are 'outer' languages which are intended to express the general semantics, force and purpose of a message; they typically also indicate the sender and intended receiver (and/or the actor to which the message applies), and other properties of the message. Communication languages are typically built on Searle's speech-act theory, using a fixed set of performatives which characterise a message as one of a set of commonly identified interaction types; performatives are common to all agents using a given communication language. Additionally, messages contain some content expressed in a content language which is given meaning by the enclosing performative. Examples include KQML and the FIPA ACL.

Content languages are used to express properties and queries in the domain to which a message is relevant. Content languages are typically domain-dependent; their structure and semantics are defined by an ontology. Examples of content languages are KIF and FIPA-SL.
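
For illustration, a KQML-style message (the agent names, content and ontology are invented) showing how the performative wraps domain content expressed in a separate content language:

```
(ask-one
  :sender    buyer-agent
  :receiver  stock-server
  :content   (price IBM ?price)
  :language  LPROLOG
  :ontology  NYSE-TICKS)
```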

### Question 18

Why is Vickrey's protocol effective and what are its advantages? In what circumstances is its use not advisable?

### Question 19

Describe the BDI architecture (define the acronym BDI first) and how this is realised in the PRS system.

### answer

BDI stands for "Belief Desire Intention" and refers to a mechanism for modelling agents' 'mental states'. Beliefs correspond to things which the agent knows to be true at the current time, Desires to the agent's overall goals (goals which should be maintained until they are achieved or can never be achieved), and Intentions to the agent's immediately planned actions. Within PRS a set of fixed plans is defined, each indicating a (possibly structured) sequence of actions (which may be conditional on certain beliefs (pre-conditions)) and their post-conditions. The agent is given one or more goals to achieve; the first of these is pushed onto the intention stack. The agent then searches for a plan or sequence of plans which should result in the goal being achieved (deliberation) and pushes these onto the intention stack until a plan whose pre-conditions match the current beliefs is found. The actions specified by the top plan are executed until either the plan is achieved or the agent's beliefs no longer satisfy the pre-conditions of the top plan on the intention stack, in which case a new plan is calculated.
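
A minimal PRS-style interpreter sketch (the plan representation and all names are assumptions, not PRS syntax): plans are matched by post-condition, unmet pre-conditions are pushed onto the intention stack as sub-goals, and executed actions are recorded.

```python
def achieve(goal, beliefs, plan_library):
    """beliefs: set of facts; plans: dicts with 'pre', 'post', 'actions'."""
    stack = [goal]                      # intention stack
    trace = []                          # actions executed, in order
    while stack:
        top = stack[-1]
        if top in beliefs:              # intention already satisfied
            stack.pop()
            continue
        # deliberation: find a plan whose post-condition matches the intention
        plan = next((p for p in plan_library if p["post"] == top), None)
        if plan is None:
            return None                 # no applicable plan
        if plan["pre"] <= beliefs:      # pre-conditions met: execute
            trace.extend(plan["actions"])
            beliefs.add(top)            # assume the actions achieve the goal
            stack.pop()
        else:                           # push unmet pre-conditions as sub-goals
            stack.extend(plan["pre"] - beliefs)
    return trace
```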

### Question 20

How might a plan library be of use and be used by an agent?

### answer

Typically planning agents do not build plans from scratch; instead a plan library is used, consisting of a pre-defined set of plans. Each plan in the library is typically expressed in terms of a set of pre-conditions which must be satisfied for the plan to be applicable, a set of actions, and a set of post-conditions which will (probably) be satisfied when the actions are executed.

Plan selection within an agent can be done by running through the plans and finding those whose post-conditions unify with one or more of the agent's intentions, and whose pre-conditions unify with the agent's current beliefs.
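
Plan selection can be sketched as a filter over the library (here simple set membership stands in for unification, and the plan representation is an assumption):

```python
def applicable(plans, beliefs, intentions):
    """Select plans whose post-condition matches a current intention
    and whose pre-conditions are all satisfied by current beliefs."""
    return [p for p in plans
            if p["post"] in intentions and p["pre"] <= beliefs]
```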

### Question 21

Sketch the prototypical structure of a negotiation protocol.

### Question 22

Sketch the structure of a deductive reasoning agent, illustrate with an example and discuss the limitations of the logic-based approach.

### Question 23

In what situations might mobile agents be appropriate technology, what are the major technical issues created by them and how might they be resolved.

### Question 24

Why is modal logic an excellent logic for modelling multi-agent systems?

### Question 25

Explain action selection when you consider an agent as a theorem prover

### Question 26

Consider the following situation: we have a small robotic theorem-proving agent that needs to find an object in a room (a grid of 3x3) with obstacles. Your agent is only capable of moving forward, turning 90 degrees clockwise, and picking up the object.
1. What are the domain predicates
2. What are the actions of your agent
3. Provide an initial environment
4. Provide the rules necessary for your agent to accomplish its goal (finding the object) in your initial environment. Make sure that this includes at least 5 rules and involves all of the possible actions your agent can perform. Make sure that your rules are written in a general form that would allow them to be used with other initial environments.

### Question 27

Describe means-end reasoning.

### Question 28

What are hybrid systems? Elaborate on the various types of layering.

### Question 29

Find the Nash equilibria of the following games:

1.

|   | C   | D   |
|---|-----|-----|
| A | 2,1 | 0,0 |
| B | 0,0 | 1,2 |

2.

|   | D   | E   | F   |
|---|-----|-----|-----|
| A | 3,2 | 3,1 | 4,1 |
| B | 2,3 | 4,0 | 4,2 |
| C | 1,2 | 2,5 | 1,7 |

3.

|   | C   | D   |
|---|-----|-----|
| A | 7,2 | 3,1 |
| B | 8,5 | 2,6 |

### Question 30

Remove the dominated strategies from the following games. Show each step.

1.

|   | D   | E   | F   |
|---|-----|-----|-----|
| A | 2,3 | 3,5 | 2,2 |
| B | 3,2 | 5,3 | 4,5 |
| C | 4,3 | 4,4 | 3,4 |

2.

|   | E   | F   | G   | H   |
|---|-----|-----|-----|-----|
| A | 6,5 | 7,4 | 4,3 | 5,6 |
| B | 5,7 | 4,6 | 5,5 | 4,7 |
| C | 6,6 | 8,4 | 3,3 | 7,5 |
| D | 4,7 | 3,2 | 2,1 | 3,8 |
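
Iterated elimination of strictly dominated pure strategies can be mechanised; a sketch, with game 1 encoded (rows A, B, C and columns D, E, F are indices 0-2):

```python
def eliminate_strictly_dominated(payoffs):
    """Iterated elimination of strictly dominated pure strategies.
    payoffs[i][j] = (row utility, column utility).
    Returns the surviving row and column indices."""
    rows = list(range(len(payoffs)))
    cols = list(range(len(payoffs[0])))
    changed = True
    while changed:
        changed = False
        for i in list(rows):            # row i is dominated by some row k?
            if any(all(payoffs[k][j][0] > payoffs[i][j][0] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in list(cols):            # column j is dominated by some column k?
            if any(all(payoffs[i][k][1] > payoffs[i][j][1] for i in rows)
                   for k in cols if k != j):
                cols.remove(j)
                changed = True
    return rows, cols

game1 = [[(2, 3), (3, 5), (2, 2)],
         [(3, 2), (5, 3), (4, 5)],
         [(4, 3), (4, 4), (3, 4)]]
# game 1 reduces to row B and column F: ([1], [2])
```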

### Question 31

1. What are the Nash equilibria for the following extensive game with perfect information:
```
                   1
        /          |          \
   (2,1)         (1,1)         (0,2)
      /            |             \
     2             2              2
    / \           / \            / \
  y1   n1       y2   n2        y3   n3
  /     \       /     \        /     \
2,0    0,0    1,1    0,0    0,2    0,0
```

2. Give an example of a game that produces unintuitive Nash equilibria. Explain why.
3. Explain backward induction in the context of equilibria for extensive games with perfect information

### Question 32

Explain how modal logic extends classical logic. Give both syntax and semantics.

### Question 33

How can modal logic be interpreted as an epistemic logic for multi-agent systems?