Chapter 4 Continued
Constraint satisfaction problems
- States are defined by the values of a set of variables; the goal test specifies constraints on the variables --- the objective is to find variable values that satisfy all constraints
- the order in which variables are assigned values does not matter --- the solution path does not matter
Examples: cryptarithmetic, the 8-queens problem, layout problems in general
Constraint propagation
- The best approach is to avoid making guesses on particular assignments until necessary. Constraints are propagated as far as possible to reduce the number of allowable guesses --- curtail the search.
- Constraint propagation arises from dependencies among constraints. E.g., in cryptarithmetic, one constraint may specify that N = E + 1 and another that E = 2; by propagating these two constraints we get the additional restriction that N = 3
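The propagation in this example can be sketched as domain reduction (a minimal illustration; the `propagate` helper and the set-of-values domain representation are assumptions for this sketch, not from the slides):

```python
# Minimal constraint-propagation sketch: shrink each variable's domain
# using the two constraints from the cryptarithmetic example (E = 2, N = E + 1).

def propagate(domains):
    """Repeatedly shrink domains until nothing changes or one becomes empty."""
    changed = True
    while changed:
        changed = False
        # Constraint: E = 2
        if domains["E"] != {2}:
            domains["E"] &= {2}
            changed = True
        # Constraint: N = E + 1 (N must be reachable from some value of E)
        allowed_n = {e + 1 for e in domains["E"]}
        if not domains["N"] <= allowed_n:
            domains["N"] &= allowed_n
            changed = True
        if any(not d for d in domains.values()):
            return None  # contradiction detected
    return domains

domains = {"E": set(range(10)), "N": set(range(10))}
print(propagate(domains))  # {'E': {2}, 'N': {3}}
```

Propagation alone pins both variables here; in harder problems it only narrows the domains before search (guessing) begins.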
Constraint propagation continued
- Constraint propagation terminates when: (1) a contradiction is detected, or (2) no further changes can be made on the basis of current knowledge; then search (guessing) begins
- Useful heuristics can help select the best guess to try first; these heuristics will be domain-specific
Example problem: 8 queens
- Definition: place 8 queens on a chess board in such a way that none threatens any of the others (a queen threatens the squares in the same row, same column, and the same diagonal)
- Guess a solution: there are C(64, 8) = 4,426,165,368 possible guesses (the state space for the problem).
8 queens - constraint propagation
- 1st constraint: never put 2 queens in the same row; 8^8 = 16,777,216 possible guesses
- 2nd constraint: never put 2 queens in the same column; 8! = 40,320 possible guesses
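The counts quoted above can be checked directly with Python's standard library:

```python
from math import comb, factorial

# 8 queens on 64 squares, unrestricted placements: C(64, 8)
print(comb(64, 8))    # 4426165368
# One queen per row (8 row choices for each of 8 queens): 8^8
print(8 ** 8)         # 16777216
# One queen per row AND per column (a permutation of the rows): 8!
print(factorial(8))   # 40320
```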
- Now search for the configuration that satisfies the diagonal constraint; the leaves of the search tree are either solutions or dead ends
- Search is aided by good heuristics, e.g., the min-conflicts heuristic (the book gives others)
Search as applied to constraint satisfaction problems
- The path to the solution does not matter. Could use best-first search guided by a heuristic function to do the search (Why not A*?).
- Unfortunately the branching factor is large for most CSPs, and CSPs often require searching to considerable depth --- space complexity becomes a problem!
Hill-Climbing Search
- Hill-climbing can be viewed as a variant of best-first search --- memory-limited best-first search
- Hill-climbing does not keep track of any previously encountered nodes except the best so far:
current <- Make-Node(initial-state)
loop
  node <- highest-valued successor of current
  if value(node) <= value(current) return current
  current <- node
end loop
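The loop above can be made runnable; here is a minimal Python sketch on a toy one-dimensional landscape (the objective and neighbor structure are invented for illustration):

```python
def hill_climb(initial, successors, value):
    """Greedy hill-climbing: keep only the current node, move to the best
    successor, and stop as soon as no successor is strictly better."""
    current = initial
    while True:
        neighbors = successors(current)
        if not neighbors:
            return current
        best = max(neighbors, key=value)
        if value(best) <= value(current):
            return current  # local maximum (or plateau)
        current = best

# Toy landscape: maximize f(x) = -(x - 3)^2 over integer steps of +/- 1.
f = lambda x: -(x - 3) ** 2
result = hill_climb(0, lambda x: [x - 1, x + 1], f)
print(result)  # 3
```

Because only `current` is kept, memory use is constant --- but the search can halt at a local maximum rather than the global one.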
Why call it hill-climbing?
Simulated annealing
- Named after a metalworking technique (annealing): molten metal is gradually cooled; the gradual temperature decrease results in an even distribution of molecules
- Basic idea: instead of picking the best move, pick a random move; if it improves the current situation, take it; otherwise take it with some probability less than 1.0; the probability decreases exponentially with the badness of the move.
Simulated annealing continued
- Define a temperature function that decreases with time. At each move, compute the current temp., t, and use t to determine the probability of taking a bad move. When t=0 you are doing hill-climbing.
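A sketch of this scheme in Python (the exponential cooling schedule and the toy objective are assumptions for illustration, not from the slides):

```python
import math
import random

def simulated_annealing(initial, neighbor, value, schedule, max_steps=10_000):
    """Pick a random move each step; always accept improvements, and accept a
    worsening move with probability exp(delta / t), where t falls over time."""
    current = initial
    for step in range(max_steps):
        t = schedule(step)
        if t <= 0:
            return current  # temperature exhausted
        nxt = neighbor(current)
        delta = value(nxt) - value(current)
        # delta > 0: improvement, always take it.
        # delta <= 0: take it with probability exp(delta / t) < 1,
        # which shrinks exponentially as the move gets worse or t gets small.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = nxt
    return current

# Toy objective (assumed): maximize f over the integers.
f = lambda x: -(x - 7) ** 2
random.seed(0)
best = simulated_annealing(
    0,
    neighbor=lambda x: x + random.choice([-1, 1]),
    value=f,
    schedule=lambda step: 10 * 0.95 ** step,  # exponentially decreasing temperature
)
print(best)
```

Once t becomes tiny, bad moves are essentially never accepted and the loop behaves like (stochastic) hill-climbing, matching the t = 0 remark above.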
Another constraint satisfaction problem: natural language processing (NLP)
- NLP can be viewed as a constraint satisfaction problem
- Four basic types of constraints: syntactic, semantic, discourse, and pragmatic
- Unfortunately even when all constraints are considered it is still not possible to avoid guessing and searching
Examples of natural language processing tasks
communication with artificial agents in human language --- perhaps a distant goal
text summarization
machine translation
- What makes NLP so hard:
The meaning of an utterance depends on both the language used in the utterance and the context in which it was spoken
Language suffers from ambiguity
- As a result, NLP problems:
- require lots of world knowledge --- very large scale
- have huge state spaces and big branching factors
NLP continued
- What are the constraints in NLP?
syntactic: derived from the grammar of the language; e.g., word order, number agreement
semantic: derived from knowledge about entities that can exist in the world; e.g., the properties that objects can have --- semantic features and selectional restrictions
discourse: derived from the structure of coherent discourse; e.g., new entities in a sentence must be introduced
pragmatic: derived from context in general; e.g., the meaning of a sentence must be consistent with social usage and the speaker's goals: Can you pass the salt?
A specific example
- Paper: “Lexical Disambiguation using Simulated Annealing,” Proc. DARPA Workshop on Speech and Natural Language, 1992.
- The NLP task: determine the correct meanings of the words in a sentence
- Constraints considered: semantic constraints as defined by the dictionary definitions of the words being considered
- search technique: simulated annealing
Summary of search techniques
- problem definition in terms of: states, goals and actions
- uninformed vs. informed search: information helps
- can compare strategies on the basis of: completeness, optimality, and time and space complexity
Some problems with state space search
- Every operator is equally important (maybe not for weighted links)
- Must construct unbroken sequences --- assumes the world is static
- Heuristic functions choose among states not actions
- Can’t modify actions or heuristics
- To overcome these we need more complete information --- Knowledge Representation