Lab: Simulated Annealing
Unit 4: Optimization and Constraint Satisfaction — Simulated Annealing Lab (50 points)
Lab Overview
In this lab, you will implement the simulated annealing algorithm from scratch in Python, experiment with different cooling schedules, and compare SA’s performance against hill climbing on challenging multi-modal optimization problems. By the end, you will have a working SA solver and quantitative evidence for when and why it outperforms simpler local search.
Before starting, review:
- Section 4.1 — Local Search and Optimization: Hill climbing, local maxima, random restarts
- Section 4.2 — Simulated Annealing: Temperature, Metropolis criterion, cooling schedules
You will implement and test the algorithms described in those sections.
Learning Objectives
By completing this lab, you will be able to:
- Implement the simulated annealing algorithm from first principles
- Code the Metropolis acceptance criterion correctly
- Implement and compare three cooling schedules: linear, exponential, and logarithmic
- Empirically demonstrate SA’s advantage over hill climbing on multi-modal functions
- Analyze how temperature and cooling rate affect solution quality
Setup: Open the Notebook
Download the starter notebook:
Open in Google Colab:
- Click File → Upload notebook
- Upload the downloaded .ipynb file
- Begin coding in the TODO sections
All required libraries (numpy, matplotlib, math) are pre-installed in Colab.
The Metallurgy Analogy
Before writing any code, internalize the physical intuition behind SA.
When a blacksmith anneals metal:
- Heating → atoms move rapidly and randomly, breaking out of suboptimal arrangements
- Slow cooling → atoms gradually settle into a stable, low-energy crystalline structure
- Fast cooling → atoms freeze into whatever arrangement they happen to be in (often suboptimal)

The algorithm maps directly:

- High temperature → accept almost any neighboring state (explore broadly)
- Cooling → accept fewer bad moves over time
- Low temperature → behave like hill climbing (refine the current solution)
Part 1: Basic SA Implementation (20 points)
Function 1: Random Neighbor (5 points)
Generate a random neighboring state by adding a small random perturbation to the current state.
```python
def random_neighbor(current, step_size=0.1):
    """
    Generate a random neighboring state.

    Parameters:
        current (float): Current state value
        step_size (float): Maximum magnitude of change

    Returns:
        float: New neighbor state = current + random change in [-step_size, step_size]

    TODO: Implement this function using np.random.uniform
    """
    pass
```
The neighbor should be within step_size of the current state in either direction.
Function 2: Acceptance Probability (5 points)
Implement the Metropolis criterion.
The Metropolis Acceptance Criterion:
- P(accept) = 1.0 if neighbor_energy < current_energy
- P(accept) = e^(−ΔE / T) if neighbor_energy ≥ current_energy
Where ΔE = neighbor_energy − current_energy (positive when neighbor is worse) and T is the current temperature.
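To build intuition before coding the function, you can plug numbers into the formula directly. A quick sketch (the `metropolis_demo` helper is illustrative only, not part of the required API):

```python
import math

def metropolis_demo(delta_e, temp):
    # Probability of accepting an uphill (worse) move of size delta_e at temperature temp
    return math.exp(-delta_e / temp)

# The same uphill move of delta_e = 5 is accepted readily when hot, rarely when cold:
print(metropolis_demo(5, 100))  # ~0.951: broad exploration
print(metropolis_demo(5, 1))    # ~0.007: near-hill-climbing refinement
```

This is the analogy from earlier in numbers: the acceptance rule is the same at every temperature; only T changes how forgiving it is.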
```python
def acceptance_probability(current_energy, neighbor_energy, temp):
    """
    Calculate probability of accepting a move.

    Always accept better solutions (return 1.0).
    Accept worse solutions with probability e^(-delta_e / temp).

    Parameters:
        current_energy (float): Objective value at current state
        neighbor_energy (float): Objective value at neighbor state
        temp (float): Current temperature (must be > 0)

    Returns:
        float: Acceptance probability in [0, 1]

    TODO: Implement using math.exp
    """
    pass
```
Function 3: Main SA Algorithm (10 points)
Combine neighbor generation, acceptance criterion, and cooling into a complete SA implementation.
```python
def simulated_annealing(objective_function, initial_state,
                        initial_temp, cooling_rate, max_iterations):
    """
    Run simulated annealing to minimize objective_function.

    Algorithm:
    1. Initialize: current = initial_state, temp = initial_temp
    2. Repeat for max_iterations:
        a. Generate random neighbor
        b. Calculate acceptance probability
        c. Accept if random() < probability
        d. Update best state if improved
        e. Cool temperature: temp *= cooling_rate
        f. Enforce minimum temperature (floor at 0.01)
    3. Return best state found, its energy, and history

    Returns:
        tuple: (best_state, best_energy, energy_history)

    TODO: Implement the full algorithm
    """
    pass
```
Keep track of two states:
- current: the state you are currently at (may be worse than the best found)
- best: the best state found so far across all iterations

SA can temporarily move to worse states, so current and best can diverge. Always return best, not current.
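A toy trace makes the divergence concrete: walk through a fixed sequence of accepted energies and watch best stay put while current wanders. (The `track_best` helper below is illustrative only; it is not part of the required API.)

```python
def track_best(accepted_energies):
    # 'current' follows every accepted move; 'best' only ever improves
    best = float("inf")
    trace = []
    for current in accepted_energies:
        best = min(best, current)
        trace.append((current, best))
    return trace

# current dips to 2.0 and then worsens to 6.0, but best holds at 2.0:
print(track_best([5.0, 3.0, 4.5, 2.0, 6.0]))
# [(5.0, 5.0), (3.0, 3.0), (4.5, 3.0), (2.0, 2.0), (6.0, 2.0)]
```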
Part 2: Cooling Schedules (15 points)
Implement three different temperature schedules.
Each schedule returns the temperature at time step t.
Linear Cooling (5 points)
T(t) = T₀ × (1 − t / max_t)
Characteristics: Reaches zero temperature quickly; may terminate exploration too early on complex landscapes.
Exponential Cooling (5 points)
T(t) = T₀ × α^t
Characteristics: Most common in practice. With α = 0.95, temperature halves every ~14 steps. Choose α between 0.85 and 0.99 depending on how much exploration the problem requires.
Logarithmic Cooling (5 points)
T(t) = T₀ / log(1 + t)
Characteristics: Very slow; theoretically guarantees convergence to the global optimum as t → ∞. Impractical for most applications but useful as a benchmark.
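The three formulas can be tabulated side by side before you wire them into SA. A sketch, assuming T₀ = 100 (note the logarithmic schedule divides by log(1 + t), which is zero at t = 0, so start it at t = 1; also check that the exponential schedule at t = 14 confirms the "halves every ~14 steps" claim):

```python
import math

T0 = 100.0  # initial temperature

def linear_temp(t, max_t=1000):
    return T0 * (1 - t / max_t)

def exponential_temp(t, alpha=0.95):
    return T0 * alpha ** t

def logarithmic_temp(t):
    return T0 / math.log(1 + t)  # undefined at t = 0

for t in (1, 14, 100, 500):
    print(t, linear_temp(t), exponential_temp(t), logarithmic_temp(t))
```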
Test Functions
The notebook provides three test functions with known optima:
| Function | Type | Global Minimum | Location |
|---|---|---|---|
| Quadratic | Unimodal (one valley) | 0.0 | x = 3 |
| Rastrigin | Multi-modal (many valleys) | 0.0 | x = 0 |
| Schwefel | Highly multi-modal | ≈ 0.0 | x ≈ 420.97 |
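The notebook supplies these functions, but for reference, a 1-D Rastrigin in its standard form looks like the sketch below (shifted variants exist; this one has its global minimum at x = 0, matching the table):

```python
import numpy as np

def rastrigin(x, A=10):
    # 1-D Rastrigin: a parabola overlaid with cosine ripples that create many local minima
    return A + x**2 - A * np.cos(2 * np.pi * x)

print(rastrigin(0.0))  # 0.0 (the global minimum)
print(rastrigin(1.0))  # ~1.0 (a nearby local minimum: the ripple term vanishes at integers)
```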
Run all three cooling schedules on all three functions and record the results in the provided comparison table.
Part 3: SA vs. Hill Climbing (10 points)
Compare SA against standard hill climbing on the multi-modal Rastrigin and Schwefel functions.
Comparison Protocol:
- Run hill climbing 20 times from random starting points
- Run simulated annealing 20 times with exponential cooling (T₀ = 100, α = 0.95)
- Collect: mean final energy, standard deviation, best value found
- Create a side-by-side bar chart comparing results on both functions
- Write a 3–5 sentence analysis explaining the observed differences
Expected Results Pattern:
- Quadratic function: Hill climbing and SA perform similarly (unimodal — no local optima to get stuck in)
- Rastrigin function: SA finds significantly better solutions (~80% improvement typical)
- Schwefel function: SA shows dramatic improvement over hill climbing (~86% improvement typical)
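One way to organize the 20-trial comparison is a small harness that works with any solver following the Part 1 return signature of (best_state, best_energy, energy_history). A sketch; the `run_trials` helper and the `dummy_solver` stand-in are illustrative only (the stand-in exists so the snippet runs; swap in your own SA and hill-climbing functions):

```python
import numpy as np

def run_trials(solver, objective, n_trials=20, low=-5.12, high=5.12, seed=0):
    """Repeatedly run a solver from random starting points and summarize final energies."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(n_trials):
        start = rng.uniform(low, high)
        _, best_energy, _ = solver(objective, start)  # Part 1 signature
        finals.append(best_energy)
    finals = np.array(finals)
    return finals.mean(), finals.std(), finals.min()

# Stand-in "solver" that just reports its starting point, so the harness runs:
def dummy_solver(objective, start):
    return start, objective(start), [objective(start)]

mean_e, std_e, best_e = run_trials(dummy_solver, lambda x: x**2)
print(f"mean={mean_e:.3f}  std={std_e:.3f}  best={best_e:.3f}")
```

Fixing the seed per solver keeps the comparison reproducible; the mean/std/min triple feeds directly into the bar chart.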
Part 4: Parameter Analysis (5 points)
Choose one of the following investigations:
Option A: Temperature Analysis
Run SA on the Rastrigin function with initial temperatures T₀ ∈ {1, 10, 50, 100, 500, 1000}. Hold α = 0.95 and max_iterations = 5,000 constant. Plot final solution quality versus initial temperature. What is the optimal temperature range?
Option B: Cooling Rate Analysis
Run SA on the Rastrigin function with cooling rates α ∈ {0.80, 0.90, 0.95, 0.99, 0.999}. Hold T₀ = 100 and max_iterations = 5,000 constant. Plot final solution quality versus cooling rate. What is the trade-off between exploration time and solution quality?
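For Option B, part of the trade-off can be quantified before running any searches: with multiplicative cooling, the number of steps until the temperature reaches the 0.01 floor from Part 1 follows directly from T₀ × α^t ≤ floor. A sketch:

```python
import math

def steps_to_floor(alpha, T0=100.0, floor=0.01):
    # Smallest t with T0 * alpha**t <= floor, i.e. t >= log(floor / T0) / log(alpha)
    return math.ceil(math.log(floor / T0) / math.log(alpha))

for alpha in (0.80, 0.90, 0.95, 0.99, 0.999):
    print(alpha, steps_to_floor(alpha))
```

Notice that at α = 0.999 the floor is not reached until well past 9,000 steps, so a 5,000-iteration run ends while the search is still exploring.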
Write a brief markdown cell (3–5 sentences) explaining your findings and recommending parameter settings for a practitioner.
Grading Rubric
| Component | Points |
|---|---|
| random_neighbor implementation | 5 |
| acceptance_probability (Metropolis criterion) | 5 |
| simulated_annealing main algorithm | 10 |
| Linear cooling schedule | 5 |
| Exponential cooling schedule | 5 |
| Logarithmic cooling schedule | 5 |
| SA vs. hill climbing comparison with analysis | 10 |
| Parameter analysis (temperature OR cooling rate) with plot | 5 |
| Total | 50 |
Submission Instructions
Submitting Your Lab:
- Complete all TODO sections in the notebook
- Test your implementation on all three objective functions
- Ensure all plots render (run all cells before submitting)
- Write your analysis in the provided markdown cells
- Download: File → Download → Download .ipynb
- Submit the .ipynb file to Brightspace under Week 4: Simulated Annealing Lab
Optional: Review the complete solution with detailed explanations in the SA Lab Walkthrough. The walkthrough is available after the submission deadline.
Lab code adapted from aima-python, MIT License, Copyright 2016 aima-python contributors.
This work is licensed under CC BY-SA 4.0.