Unit 4: Optimization and Constraint Satisfaction — Wrap-Up and Self-Assessment

This unit introduced two paradigms for solving problems that are too large or too structured for the systematic search algorithms of Unit 3. Local search finds good (often near-optimal) solutions to continuous and combinatorial optimization problems by iteratively improving a single state. Constraint satisfaction finds valid solutions to problems with explicit variable-domain-constraint structure by combining systematic backtracking with constraint propagation and intelligent heuristics. The practitioner’s skill is knowing when each approach applies — and when to combine them.

Key Takeaways

Part 1: Local Search and Optimization

  • An objective function assigns a numerical score to every state; optimization means finding the state with the highest (or lowest) score.

  • Hill climbing always moves to the best available neighbor; it is fast and memory-efficient but gets trapped at local maxima.

  • Random restart mitigates the local maximum problem by running hill climbing multiple times from different starting states.

  • Simulated annealing probabilistically accepts worse moves with probability e^(ΔE/T), where ΔE < 0 for a worse move and the temperature T decreases over time, enabling escape from local optima.

  • The cooling schedule (linear, exponential, or logarithmic) determines how quickly T decreases and critically affects SA’s performance.

  • Genetic algorithms maintain a population of solutions and evolve them through selection, crossover, and mutation — effective when solutions have natural building-block structure.
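
The annealing loop described above can be sketched in a few lines of Python. This is a minimal illustration rather than a tuned implementation; the neighbor function, objective, and cooling schedule below are made-up toy examples.

```python
import math
import random

def simulated_annealing(initial, neighbors, value, schedule, max_steps=10_000):
    """Maximize value(state). schedule(t) gives the temperature T at step t."""
    current = initial
    for t in range(max_steps):
        T = schedule(t)
        if T <= 0:  # fully cooled: stop searching
            break
        candidate = random.choice(neighbors(current))
        dE = value(candidate) - value(current)
        # Metropolis criterion: always accept improvements; accept a worse
        # move (dE < 0) with probability e^(dE/T)
        if dE > 0 or random.random() < math.exp(dE / T):
            current = candidate
    return current

# Toy example: maximize f(x) = -(x - 3)^2 over the integers
best = simulated_annealing(
    initial=0,
    neighbors=lambda x: [x - 1, x + 1],
    value=lambda x: -(x - 3) ** 2,
    schedule=lambda t: 0.95 ** t,  # exponential cooling
)
```

With exponential cooling the temperature drops quickly, so late iterations accept almost no downhill moves and the search settles near a maximum.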

Part 2: Constraint Satisfaction Problems

  • A CSP is defined by variables, domains (possible values), and constraints (rules on combinations).

  • The goal is any assignment satisfying all constraints — not a best solution, but a valid one.

  • Backtracking search assigns variables one at a time and undoes assignments when constraints are violated; it is far more efficient than naive enumeration.

  • Constraint propagation (especially AC-3) reduces variable domains by detecting and eliminating values with no compatible partner in neighboring domains.

  • MRV (Minimum Remaining Values) chooses the most constrained variable first — a "fail-first" strategy that detects dead ends early.

  • The degree heuristic breaks MRV ties by choosing the variable involved in the most constraints with unassigned neighbors.

  • LCV (Least Constraining Value) orders values to try the option that leaves the most freedom for future assignments.

  • Combining AC-3 + MRV + LCV typically reduces search by factors of 10 to 1,000 compared to basic backtracking, depending on the problem.
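
A compact sketch of how these pieces fit together, using map coloring of Australia (a standard CS 188 example) with MRV variable ordering and forward checking. The function and variable names here are illustrative, not from a library.

```python
def backtrack(assignment, domains, neighbors):
    """Backtracking search with MRV and forward checking for a binary
    not-equal constraint between every pair of neighboring variables."""
    if len(assignment) == len(domains):
        return assignment
    # MRV: choose the unassigned variable with the fewest remaining values
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        # Forward checking: remove this value from unassigned neighbors
        pruned = [n for n in neighbors[var]
                  if n not in assignment and value in domains[n]]
        for n in pruned:
            domains[n].discard(value)
        # Recurse only if no neighbor's domain was wiped out
        if all(domains[n] for n in pruned):
            result = backtrack({**assignment, var: value}, domains, neighbors)
            if result is not None:
                return result
        for n in pruned:  # undo pruning before trying the next value
            domains[n].add(value)
    return None

# Color Australia's mainland states (plus Tasmania) with three colors
neighbors = {
    "WA": {"NT", "SA"}, "NT": {"WA", "SA", "Q"},
    "SA": {"WA", "NT", "Q", "NSW", "V"},
    "Q": {"NT", "SA", "NSW"}, "NSW": {"Q", "SA", "V"},
    "V": {"SA", "NSW"}, "T": set(),
}
domains = {v: {"red", "green", "blue"} for v in neighbors}
solution = backtrack({}, domains, neighbors)
```

Adding LCV would simply reorder the `for value in ...` loop by how few neighbor values each choice eliminates.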

Algorithm Comparison Summary

Algorithm                          Type                       Complete?             Optimal?             Space
Hill Climbing                      Local Search               No                    No                   O(1)
Hill Climbing + Random Restart     Local Search               Probabilistically     No                   O(1)
Simulated Annealing                Local Search               Yes (log. cooling)    Approaches optimal   O(1)
Genetic Algorithm                  Population Search          Probabilistically     No                   O(pop. size)
Backtracking (basic)               Systematic Search          Yes                   N/A (finds any)      O(n)
Backtracking + Forward Checking    Systematic + Propagation   Yes                   N/A                  O(n)
Backtracking + AC-3 + MRV + LCV    Systematic + Propagation   Yes                   N/A                  O(n)

Complete = guaranteed to find a solution if one exists; Optimal = guaranteed to find the best solution.

Decision Guide: Optimization vs. CSP

Question             Optimization / Local Search            CSP / Backtracking
Goal                 Maximize or minimize a score           Satisfy all hard constraints
Solution type        Graded (some better than others)       Binary (valid or invalid)
Solution guarantee   Approximate (SA approaches optimal)    Complete (finds solution if exists)
Problem structure    Continuous or weakly constrained       Discrete with rich constraint graph
Memory               O(1) for SA                            O(search depth)
Best for             TSP, neural network training, design   Scheduling, Sudoku, configuration

Concept Map

The key relationships in Unit 4:

Optimization Problems
└── Objective Function (score each state)
    ├── Hill Climbing (greedy improvement)
    │   └── Problems: local maxima, plateaus, ridges
    │       └── Solutions: random restart, stochastic variants
    └── Simulated Annealing (probabilistic escape)
        └── Metropolis criterion: P = e^(ΔE/T)
            └── Cooling Schedule: T decreases over time

Constraint Satisfaction Problems
└── Variables + Domains + Constraints
    └── Backtracking Search (systematic)
        ├── Constraint Propagation (detect failures early)
        │   └── Arc Consistency (AC-3)
        └── Heuristics (choose wisely)
            ├── MRV: most constrained variable first
            ├── Degree: most constrained by neighbors
            └── LCV: least constraining value first

Self-Check: Key Formulas and Algorithms

Test your understanding across all of Unit 4.

Glossary: Unit 4 Terms

Objective Function

A function f(state) → value that assigns a numerical score to every state; the goal of optimization is to maximize (or minimize) this value.

Hill Climbing

A greedy local search algorithm that always moves to the best available neighboring state; terminates at the first local maximum reached.

Local Maximum

A state whose objective value is greater than or equal to all neighboring states, but which may not be the global optimum.

Simulated Annealing (SA)

A local search algorithm that accepts worse solutions with probability e^(ΔE/T), where T is a temperature parameter that decreases over time according to a cooling schedule.

Cooling Schedule

A function specifying the temperature T at each step of simulated annealing; common forms are linear, exponential, and logarithmic.

Genetic Algorithm (GA)

A population-based optimization algorithm that evolves solutions through selection, crossover (combining two parents), and mutation (random change).

Constraint Satisfaction Problem (CSP)

A problem defined by variables X, domains D, and constraints C; the goal is to find an assignment of values to all variables satisfying all constraints.

Constraint Graph

A graph where nodes represent CSP variables and edges represent binary constraints between pairs of variables.

All-Different Constraint

A global constraint requiring that all variables in a set take distinct values; used extensively in Sudoku, scheduling, and assignment problems.

Backtracking

A depth-first search strategy for CSPs that assigns one variable at a time, checks constraints incrementally, and undoes assignments when contradictions are found.

Constraint Propagation

Using known assignments and constraint relationships to reduce the domains of unassigned variables, detecting contradictions early.

Arc Consistency

A property of a CSP where, for every arc (Xi → Xj), every value in Xi’s domain has at least one compatible value in Xj’s domain.

AC-3

The standard algorithm for enforcing arc consistency, running in O(cd³) time; processes a queue of arcs and removes unsupported values from domains.
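
The queue-based loop can be sketched as follows. This is a minimal illustration with a generic consistency test passed in as a function; the chain example beneath it is made up.

```python
from collections import deque

def ac3(domains, neighbors, consistent):
    """Enforce arc consistency in place. consistent(x, y) tests the binary
    constraint on an arc. Returns False if some domain becomes empty."""
    queue = deque((xi, xj) for xi in neighbors for xj in neighbors[xi])
    while queue:
        xi, xj = queue.popleft()
        # Remove values of xi that have no compatible value in xj's domain
        removed = {x for x in domains[xi]
                   if not any(consistent(x, y) for y in domains[xj])}
        if removed:
            domains[xi] -= removed
            if not domains[xi]:
                return False  # wiped-out domain: no solution possible
            # Shrinking xi's domain may break arcs pointing at xi
            queue.extend((xk, xi) for xk in neighbors[xi] if xk != xj)
    return True

# Illustrative chain A—B—C with a not-equal constraint on each edge
neighbors = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
domains = {"A": {1}, "B": {1, 2}, "C": {1, 2}}
ok = ac3(domains, neighbors, consistent=lambda x, y: x != y)
# Propagation removes 1 from B (it conflicts with A), then 2 from C
```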

MRV (Minimum Remaining Values)

A variable selection heuristic that chooses the unassigned variable with the fewest legal values; also called the "fail-first" or "most constrained variable" heuristic.

Degree Heuristic

A variable selection tiebreaker that chooses the variable involved in the most constraints with other unassigned variables.

LCV (Least Constraining Value)

A value ordering heuristic that tries values in order of how few options they eliminate from neighboring variables' domains.

Min-Conflicts

A local search algorithm for CSPs that starts with a random complete assignment and iteratively reassigns variables to minimize the number of constraint violations.

Preview of Unit 5: Introduction to Logic in AI

The search and optimization algorithms of Units 3 and 4 find solutions by exploring possibilities. But human experts don’t just search — they reason. Given a set of facts and rules, they derive new conclusions without trying every possibility.

Unit 5 introduces formal logic as the foundation for AI reasoning. You will learn why formal logic is necessary (natural language is ambiguous), how propositional logic encodes facts and rules precisely, and how AI systems use logic to answer questions and plan actions.

The key connection: constraint satisfaction problems are a special case of logical inference, and many CSP techniques have direct analogues in logical reasoning systems. The structured thinking you developed in Unit 4 — variables, domains, constraints, propagation — transfers directly to the logical knowledge bases of Unit 5.


Based on the UC Berkeley CS 188 Online Textbook and CSP chapter by Nikhil Sharma, Josh Hug, Jacky Liang, and Henry Zhu, licensed under CC BY-SA 4.0.

This work is licensed under CC BY-SA 4.0.