CSCI 373 Project 2: A Simple Goal-Based Agent
Due: April 6
Overview: The wumpus world agent from the first assignment cannot look ahead to evaluate the effects of a sequence of actions. In this project you will implement a variant of the Simple-Problem-Solving-Agent in Figure 3.1 of your text. You will use one of the search strategies implemented in the AIsearch library written by Peter Bouthoorn. That library is available in the course directory: /usr/local/csci/373/AIsearch.
As depicted in Figure 3.1 of your text, the process() function in your new goal-based agent
should be modified to:
1. Identify a goal using the information in its current state.
2. Generate a sequence of actions to achieve that goal.
3. Execute the sequence of actions, one action at a time, updating its state after each action.
4. When the sequence of actions has been executed, re-evaluate its current state and formulate a new goal.
5. Repeat steps 2 through 4 as necessary until the game is over.
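The steps above can be sketched as a control loop. The code below is a toy stand-in, not the actual assignment code: the "state" is a bare counter and the "goal" is reaching zero, purely to make the shape of process() concrete. Your real implementation works with your State class and the AIsearch library instead.

```cpp
#include <cassert>
#include <deque>

// Toy stand-in for the goal-based control loop. The "state" is a bare
// counter and the "goal" is reaching zero -- the names and bodies here
// are placeholders, not the example code's actual classes.
struct ToyAgent {
    int state;                 // stands in for the wumpus-world state
    std::deque<int> plan;      // current sequence of actions

    int formulate_goal() { return 0; }        // step 1: goal is state == 0
    void formulate_problem(int) {}            // step 2: no-op in the toy
    std::deque<int> search() {                // step 2: "plan" = decrement to 0
        std::deque<int> seq;
        for (int s = state; s > 0; --s) seq.push_back(-1);
        return seq;
    }
    int recommendation() {                    // step 3: next action in the plan
        int a = plan.front();
        plan.pop_front();
        return a;
    }

    void process() {
        while (state != 0) {                  // step 5: until the game is over
            int goal = formulate_goal();      // step 1
            formulate_problem(goal);          // step 2
            plan = search();
            while (!plan.empty())             // step 3: execute one at a time,
                state += recommendation();    //         updating state each step
        }                                     // step 4: re-evaluate, new goal
    }
};
```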
To support this new version of the process() function you will need to add the following
functions to your goal-based player:
1. formulate_goal() formulates and returns a goal, call it "currentGoal", using the information in the current state.
2. formulate_problem(currentGoal) formulates and returns a search problem, call it "currentProblem", given the desired goal and the current state. In other words, it builds a search problem by creating a start state from the current state and a goal state from the desired goal.
3. search(problem) performs a search using one of the search strategies implemented in AIsearch. The purpose of the search is to identify a sequence of actions leading to the goal state formulated in step 2 above. The search has access only to the information stored in the agent's current internal state. This function returns a sequence of actions leading from the start state to the goal state, or NULL if the goal state cannot be reached.
4. recommendation() selects the next action that the Agent should perform based on the information in the current state and the sequence of actions formulated in step 3 above. An acceptable implementation would simply return the next action in that sequence.
The AIsearch library offers a large number of search classes that you can use to develop the
search() function described in step 3 above. To use this library, you need to define two derived
classes: a Node class, and a Search class.
The Node class should be similar to the State class in that it contains information about the wumpus world and how that information changes at each step of the search. Your Node class is derived from the library's base Node class and must implement three virtual functions:
- int operator==(const Node &) const;
This function is used to compare two search nodes. Nodes are compared for one of two reasons: (1) to ensure that duplicate nodes are not included in the search path, and (2) to determine when the goal node has been reached.
- void display() const;
This function is used to print a node to STDOUT.
- Node *do_operator(int);
This function is used to generate the children of a node, one node at a time. One child is
generated by each legal move.
The Search class is trivially derived from one of the classes in the AIsearch library.
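To make the three virtual functions concrete, here is a toy derived node whose "state" is just the agent's (x, y) square, with the four compass moves as operators. The base class below is a stand-in written so the sketch compiles on its own; the real base class lives in /usr/local/csci/373/AIsearch and its exact interface may differ, so consult the library headers.

```cpp
#include <cassert>
#include <iostream>

// Stand-in for the AIsearch base Node class, included only so this
// sketch is self-contained. Use the library's actual header instead.
class Node {
public:
    virtual ~Node() {}
    virtual int operator==(const Node &) const = 0;
    virtual void display() const = 0;
    virtual Node *do_operator(int) = 0;
};

// Toy derived node: the search state is only the agent's square.
// A real wumpus node would carry much more of the agent's internal
// state (orientation, arrow, gold, known pits, and so on).
class WumpusNode : public Node {
    int x, y;
public:
    WumpusNode(int x_, int y_) : x(x_), y(y_) {}
    int operator==(const Node &other) const {
        // Assumes both nodes are WumpusNodes, as in a single search.
        const WumpusNode &w = static_cast<const WumpusNode &>(other);
        return x == w.x && y == w.y;       // same square => same node
    }
    void display() const { std::cout << "(" << x << "," << y << ")\n"; }
    Node *do_operator(int op) {
        // One child per legal move: east, west, north, south.
        static const int dx[] = {1, -1, 0, 0};
        static const int dy[] = {0, 0, 1, -1};
        if (op < 0 || op > 3) return 0;    // no such move
        return new WumpusNode(x + dx[op], y + dy[op]);
    }
};
```

Deriving the Search class is then typically a one-line class declaration naming one of the library's strategy classes as the base; check the AIsearch headers for the exact class names and constructor signatures.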
Programming Intensive Project:
You will be given an example agent program (including supporting classes) and asked to
implement new versions of the following:
- the formulate_goal() function for the GoalPlayer Class
- the formulate_problem(goal) function for the GoalPlayer Class
- the Search class
- the Node class
Test Intensive Project:
You will be given an example agent program (including supporting classes) and asked to
implement new versions of the following:
- the formulate_goal(state) function for the GoalPlayer Class
- the formulate_problem(state, goal) function for the GoalPlayer Class
- the Search class
- the operator==(const Node &) function for the Node class
The above changes are the minimum requirements for this assignment. In addition to defining the Agent's goals and the search strategy in the code changes described above, you could also improve the Agent's performance by introducing changes that would allow it to recognize and react correctly to multiple wumpi and multiple pieces of gold. The example agent does not do this. Other changes are, of course, also acceptable and encouraged.