Chapter 2: Intelligent Agents

Recall the focus of the course:

Focus of this lecture:



Key terms:

What are agents:





Goal:





The question of empirical validation:

The measure of success (rationality) must be judged relative to what the agent has perceived and knows; it cannot require omniscience.

What is rational at any given time depends on four things:

1. The performance measure that defines the degree of success

2. The percept sequence --- the complete perceptual history of the agent so far

3. Whatever built-in knowledge the agent has

4. The actions that the agent can perform



The ideal rational agent: does the action that is expected to maximize the performance measure, on the basis of the percept sequence and whatever built-in knowledge it has.
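
A minimal Python sketch of this definition (the class and method names are illustrative, not from the text): the agent keeps its percept sequence and built-in knowledge, and picks the action with the highest expected performance.

# A rational agent chooses the action expected to maximize the
# performance measure, given its percept sequence and built-in knowledge.
class RationalAgent:
    def __init__(self, actions, knowledge):
        self.actions = actions        # actions the agent can perform
        self.knowledge = knowledge    # built-in knowledge
        self.percepts = []            # percept sequence so far

    def expected_performance(self, action):
        # Placeholder: estimate the expected value of the performance
        # measure if 'action' is taken, using self.percepts and self.knowledge.
        raise NotImplementedError

    def act(self, percept):
        self.percepts.append(percept)
        return max(self.actions, key=self.expected_performance)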

Autonomy:

Agent Design --- the agent program:





Mapping from percepts to actions by table lookup:


function: Table-Driven-Agent(percept) returns an action
    static: percepts, a sequence, initially empty
            table, a table of actions, indexed by percept sequences, initially fully specified

    append percept to the end of percepts
    action <- Lookup(percepts, table)
    return action


Figure 2.5
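
A direct Python rendering of the table-driven agent (a sketch only; the table keys and the toy entries below are made up for illustration):

# Table-driven agent: appends each percept to the percept sequence and
# looks the whole sequence up in a fully specified table of actions.
def make_table_driven_agent(table):
    percepts = []                         # the percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)
        return table.get(tuple(percepts)) # Lookup(percepts, table)

    return agent

# Toy usage: every possible percept sequence must appear in the table.
table = {
    ("dirty",): "clean-up",
    ("clean",): "move-on",
    ("clean", "dirty"): "clean-up",
}
agent = make_table_driven_agent(table)
print(agent("clean"))    # -> move-on
print(agent("dirty"))    # -> clean-up

Note that the table needs an entry for every possible percept sequence, so it grows astronomically with the length of the sequence; this is the basic argument against the table-lookup design.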



Example: navigating while shopping



Simple Reflex Agent:




function: Simple-Reflex-Agent(percept) returns an action
    static: rules, a set of condition-action rules

    state <- Interpret-Input(percept)
    rule <- Rule-Match(state, rules)
    action <- Rule-Action[rule]
    return action


Figure 2.8
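
A Python sketch of the simple reflex agent; representing the condition-action rules as (predicate, action) pairs is my assumption, not something the text specifies.

# Simple reflex agent: acts on the current percept only, via
# condition-action rules; it keeps no history.
def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)        # Interpret-Input
        for condition, action in rules:         # Rule-Match
            if condition(state):
                return action                   # Rule-Action[rule]
        return None                             # no rule matched

    return agent

# Toy usage: the percept itself serves as the state description.
rules = [
    (lambda s: s == "car-in-front-is-braking", "initiate-braking"),
    (lambda s: True, "keep-driving"),
]
agent = make_simple_reflex_agent(rules, interpret_input=lambda p: p)
print(agent("car-in-front-is-braking"))   # -> initiate-braking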





Reflex Agent with Internal State


function: Reflex-Agent-With-State(percept) returns an action
    static: state, a description of the current world state
            rules, a set of condition-action rules

    state <- Update-State(state, percept)
    rule <- Rule-Match(state, rules)
    action <- Rule-Action[rule]
    state <- Update-State(state, action)
    return action


Figure 2.10
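
The same idea in Python; how Update-State combines the old state with a new percept (or with the agent's own action) is left open in the text, so update_state below is just an assumed hook supplied by the caller.

# Reflex agent with internal state: maintains a description of the world
# so it can act sensibly even when the current percept is incomplete.
class ReflexAgentWithState:
    def __init__(self, rules, update_state, initial_state=None):
        self.rules = rules                  # condition-action rules
        self.update_state = update_state    # Update-State hook
        self.state = initial_state          # description of the current world state

    def act(self, percept):
        self.state = self.update_state(self.state, percept)        # fold in the percept
        for condition, action in self.rules:                        # Rule-Match
            if condition(self.state):
                self.state = self.update_state(self.state, action)  # record the action's effect
                return action
        return None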









Goal-Based Agent

Utility-Based Agent
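
No pseudocode is given for these two, so the following is only a rough sketch: a utility-based agent picks the action whose predicted resulting state has the highest utility, and a goal-based agent can be seen as the special case where utility is 1 on goal states and 0 elsewhere. Here predict, utility, and goal_test are assumed hooks, not names from the text.

# Rough sketch of utility-based action selection (illustrative only).
def utility_based_choice(state, actions, predict, utility):
    # predict(state, action) -> the state expected to result from the action
    # utility(state)         -> a number; higher means a "happier" state
    return max(actions, key=lambda a: utility(predict(state, a)))

# A goal-based agent as the degenerate case: utility is 1 on goal states.
def goal_utility(goal_test):
    return lambda state: 1 if goal_test(state) else 0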



Examples of different types of agents: see Fig. 2.3 in the text



Connection to search and knowledge representation:

Environment properties (these depend on how the environment is conceptualized):

1. Accessible vs. inaccessible

2. Deterministic vs. nondeterministic

3. Episodic vs. non-episodic

4. Static vs. dynamic: will the world change while the agent is thinking?

5. Discrete vs. continuous: a finite number of distinct percepts and actions



Accessibility?

Deterministic? The next state of the environment is completely determined by the current state and the actions selected by the agent.

Episodic vs. non-episodic:

Static vs. dynamic:

Discrete:
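
One convenient way to keep track of these properties when analyzing an environment is a small record type; the field names below are mine, and the classification of chess follows the usual textbook table, so treat it as an example rather than gospel.

from dataclasses import dataclass

# A small record mirroring the five environment properties above.
@dataclass
class EnvironmentProperties:
    accessible: bool     # can the sensors see the complete relevant state?
    deterministic: bool  # is the next state fixed by current state + action?
    episodic: bool       # do episodes stand alone, independent of history?
    static: bool         # does the world hold still while the agent thinks?
    discrete: bool       # finitely many distinct percepts and actions?

# Chess without a clock, as it is commonly classified:
chess = EnvironmentProperties(accessible=True, deterministic=True,
                              episodic=False, static=True, discrete=True)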



Summary of Agents

Web site to check:





Homework:

      1. Accessible vs. inaccessible
      2. Deterministic vs. nondeterministic
      3. Episodic vs. non-episodic
      4. Static vs. dynamic: will the world change while the agent is thinking?
      5. Discrete vs. continuous: a finite number of distinct percepts and actions