Summary of Chapter 3 Continued
Iterative Deepening continued
- In essence, iterative deepening pretends that nodes at the cutoff limit, n, have no successors. If you fail to find a goal when the limit is n, use limit n+1 and repeat the process.
- IDS is complete and optimal (optimal when all step costs are equal, as with BFS)
- Space complexity is O(bd)
- Time complexity is a little worse than BFS or DFS but still O(b^d)
- IDS is the preferred strategy when the search space is large and the solution depth is unknown
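The scheme above can be sketched as a loop over depth-limited DFS. This is a minimal illustration over a hypothetical toy state space (a binary tree of integers), not the textbook's pseudocode:

```cpp
#include <vector>

// Hypothetical tiny state space: node n's successors are 2n+1 and 2n+2
// (a binary tree); nodes 7 and beyond are leaves.
std::vector<int> successors(int n) {
    if (n >= 7) return {};
    return {2 * n + 1, 2 * n + 2};
}

// Depth-limited DFS: nodes at the cutoff limit are treated as having
// no successors.
bool depthLimited(int node, int goal, int limit) {
    if (node == goal) return true;
    if (limit == 0) return false;      // pretend this node has no successors
    for (int s : successors(node))
        if (depthLimited(s, goal, limit - 1)) return true;
    return false;
}

// Iterative deepening: fail at limit n, retry with limit n+1.
// Returns the depth of the shallowest occurrence of the goal.
int iterativeDeepening(int start, int goal) {
    for (int limit = 0;; ++limit)
        if (depthLimited(start, goal, limit))
            return limit;
}
```

Because each pass restarts from scratch, only the current path is ever stored, which is where the O(bd) space bound comes from.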
State space vs. search tree
Summary of uninformed search
- Recall the basic tree search algorithm; in this algorithm, the only place knowledge is applied is in the queueing function:
- BFS: new nodes are added to the end of the queue
- DFS: new nodes are added to the front of the queue
- Uniform-cost search: the queue is sorted according to path cost
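The three queueing policies above can be demonstrated with one generic loop. The sketch below (over a hypothetical binary-tree state space, with uniform-cost search omitted for brevity) shows that BFS and DFS differ only in where new nodes enter the frontier:

```cpp
#include <deque>
#include <vector>

// Hypothetical state space: node n's successors are 2n+1 and 2n+2;
// nodes 7 and beyond are leaves.
std::vector<int> successors(int n) {
    if (n >= 7) return {};
    return {2 * n + 1, 2 * n + 2};
}

// Generic tree search. The only strategy-specific step is the queueing
// function: BFS appends new nodes, DFS prepends them.
// Returns the number of nodes expanded before reaching the goal.
int treeSearch(int start, int goal, bool breadthFirst) {
    std::deque<int> frontier{start};
    int expanded = 0;
    while (!frontier.empty()) {
        int node = frontier.front();
        frontier.pop_front();
        ++expanded;
        if (node == goal) return expanded;
        for (int s : successors(node)) {
            if (breadthFirst) frontier.push_back(s);   // BFS: end of queue
            else              frontier.push_front(s);  // DFS: front of queue
        }
    }
    return -1; // goal not in the space
}
```

On this space, BFS expands every node at depths 0-2 before finding node 6, while DFS happens to reach it much sooner; neither behavior is guaranteed in general.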
Assignment
Your first programming assignment is to write a
Reflex-Agent-With-State, as in Figure 2.10 of your text, to
act in the Wumpus World. This is a relatively involved project and it
is therefore broken into sub-assignments as follows:
- Part 1: Due Wed. 11, 1998:
Define a set of
condition-action rules to guide the agent's choice of
action at each step of the game
- In formulating these rules, you will use information from an
internal state representation that you will write later. Indicate the
required state information by designating the name of a function that
will provide that information. The function name should be indicative
of what that function returns. For example, a possible rule for the
GRAB action is:
if ( !holdGold() && CurrentPercept().isGlitter() )
return GRAB;
Identifying the information that you need from the internal state
will help you design that state representation. Be realistic about
the functions that you use in formulating your
condition-action rules; do not write rules that use
functions that would be exceedingly difficult or impossible to
write.
- Suggestion: Write one or more rules to trigger each of the Agent's
possible actions. E.g., a rule that specifies when the agent should
GRAB, a rule for CLIMB, one or more rules for GOFORWARD, etc. If it
is possible for more than one rule to apply in a given configuration,
you could order the rules in terms of priority so that the highest
priority rules are triggered first.
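The priority-ordering suggestion can be realized by testing the rules in order and firing the first one whose condition holds. The sketch below uses the `holdGold()` and `CurrentPercept().isGlitter()` names from the GRAB example; the stub structs and the CLIMB/GOFORWARD rules are purely illustrative assumptions, not a required design:

```cpp
#include <string>

// Hypothetical stubs, just enough to exercise the rules; in the real
// agent these would query the simulation and your internal state class.
struct Percept {
    bool glitter = false;
    bool isGlitter() const { return glitter; }
};

struct AgentState {
    bool gold = false;
    Percept percept;
    bool holdGold() const { return gold; }
    const Percept& CurrentPercept() const { return percept; }
};

// Rules listed in priority order: the first condition that holds fires.
std::string chooseAction(const AgentState& s) {
    if (!s.holdGold() && s.CurrentPercept().isGlitter())
        return "GRAB";        // rule from the notes
    if (s.holdGold())
        return "CLIMB";       // hypothetical: illustrative only
    return "GOFORWARD";       // hypothetical default: keep exploring
}
```

Ordering the tests this way makes the priority scheme explicit: a lower-priority rule can never preempt a higher one.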
- Part 2: Due Mon 16, 1998:
Define the internal state representation.
- The internal state should be written as a class; in Part 2 you
are asked to write the definition file (i.e., the ".h" file) for this
class. The class should be written in such a way that state
information can be retained from one trial to the next. In writing
this file, be sure to think about how you will get the information
that you need from the simulation code. Be sure to specify the
correct types for all parameters.
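As a rough illustration of what such a definition file might contain, here is a skeleton state class. Every member name below is an assumption chosen for this sketch; pick names that match the functions you used in your Part 1 rules:

```cpp
// WumpusState.h -- hypothetical sketch of an internal-state class.
// All members are illustrative, not a required interface.
#ifndef WUMPUS_STATE_H
#define WUMPUS_STATE_H

class WumpusState {
public:
    WumpusState() : holdingGold(false), x(1), y(1) {}

    // Accessors used by the condition-action rules.
    bool holdGold() const { return holdingGold; }
    int  currentX() const { return x; }
    int  currentY() const { return y; }

    // Mutators called as percepts arrive and actions are taken.
    void grabGold()             { holdingGold = true; }
    void moveTo(int nx, int ny) { x = nx; y = ny; }

    // Information can be retained from one trial to the next:
    // reset only what a new trial invalidates (e.g., keep a map of
    // squares learned to be safe, but drop the gold and position).
    void startNewTrial() { holdingGold = false; x = 1; y = 1; }

private:
    bool holdingGold;  // has the agent picked up the gold?
    int  x, y;         // agent's believed position on the grid
};

#endif
```

Note how `startNewTrial()` separates per-trial state from state that persists across trials, which is the retention requirement stated above.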
- Part 3: Due Mon 23, 1998:
Write the Reflex Agent.
- In your csci/373/wumpus directory create a sub-directory called
ReflexAgent and copy the Makefile from the directory
/usr/local/csci/373/players/alpha/HumanPlayer into your new ReflexAgent
directory. Use this directory to implement your ReflexAgent player.
This player should use the State class from Part 2 to maintain an
internal state. It should also use the condition-action
rules from Part 1 to select the action that it will perform at each
step. See the bullet entitled Requirements for your player
in the Feb 2 lecture notes for a discussion of the
requirements for the ReflexAgent class. You should use the
HumanPlayer class as a guide.
- Your ReflexAgent should compile using the Makefile specified above
and run in the Wumpus World simulation. Your agent will be tested in
multiple trials in environments ranging from 4 to 8 squares.