Lab: Simulated Annealing

Unit 4: Optimization and Constraint Satisfaction — Simulated Annealing Lab (50 points)

Lab Overview

In this lab, you will implement the simulated annealing algorithm from scratch in Python, experiment with different cooling schedules, and compare SA’s performance against hill climbing on challenging multi-modal optimization problems. By the end, you will have a working SA solver and quantitative evidence for when and why it outperforms simpler local search.

Before starting, review:

You will implement and test the algorithms described in those sections.

Learning Objectives

By completing this lab, you will be able to:

  1. Implement the simulated annealing algorithm from first principles

  2. Code the Metropolis acceptance criterion correctly

  3. Implement and compare three cooling schedules: linear, exponential, and logarithmic

  4. Empirically demonstrate SA’s advantage over hill climbing on multi-modal functions

  5. Analyze how temperature and cooling rate affect solution quality

Assignment Overview

| Part | Focus | Points |
|------|-------|--------|
| Part 1 | Basic SA Implementation | 20 |
| Part 2 | Cooling Schedules | 15 |
| Part 3 | SA vs. Hill Climbing | 10 |
| Part 4 | Parameter Analysis | 5 |
| **Total** | | **50** |

Setup: Open the Notebook

Download the starter notebook:

Open in Google Colab:

  1. Go to colab.research.google.com

  2. Click File → Upload notebook

  3. Upload the downloaded .ipynb file

  4. Begin coding in the TODO sections

All required libraries (numpy, matplotlib) are pre-installed in Colab; math is part of the Python standard library.

The Metallurgy Analogy

Before writing any code, internalize the physical intuition behind SA.

When a blacksmith anneals metal:

  • Heating → atoms move rapidly and randomly, breaking out of suboptimal arrangements

  • Slow cooling → atoms gradually settle into a stable, low-energy crystalline structure

  • Fast cooling → atoms freeze into whatever arrangement they happen to be in (often suboptimal)

The algorithm maps directly:

  • High temperature → accept almost any neighboring state (explore broadly)

  • Cooling → accept fewer bad moves over time

  • Low temperature → behave like hill climbing (refine the current solution)

Part 1: Basic SA Implementation (20 points)

Function 1: Random Neighbor (5 points)

Generate a random neighboring state by adding a small random perturbation to the current state.

def random_neighbor(current, step_size=0.1):
    """
    Generate a random neighboring state.

    Parameters:
        current (float): Current state value
        step_size (float): Maximum magnitude of change

    Returns:
        float: New neighbor state = current + random change in [-step_size, step_size]

    TODO: Implement this function using np.random.uniform
    """
    pass

The neighbor should be within step_size of the current state in either direction.
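As a quick sanity check on the building block the TODO names, `np.random.uniform(low, high)` draws uniformly from the interval `[low, high)`, so every perturbation it produces stays within the step-size bound:

```python
import numpy as np

# Draw many candidate perturbations with step_size = 0.1.
perturbations = np.random.uniform(-0.1, 0.1, size=10_000)

# All samples lie within [-step_size, step_size).
assert perturbations.min() >= -0.1
assert perturbations.max() < 0.1
print("all perturbations within step_size")
```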

Function 2: Acceptance Probability (5 points)

Implement the Metropolis criterion.

The Metropolis Acceptance Criterion:

P(accept) = 1.0              if neighbor_energy < current_energy
P(accept) = e^(−ΔE / T)      if neighbor_energy ≥ current_energy

Where ΔE = neighbor_energy − current_energy (positive when neighbor is worse) and T is the current temperature.
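To make the formula concrete, here is a small worked example (not the graded function itself): for a fixed worse move with ΔE = 5, a hot system accepts it almost always, while a cold one almost never does.

```python
import math

delta_e = 5.0  # neighbor is worse by 5 energy units

for temp in (100.0, 10.0, 1.0):
    # Metropolis probability of accepting the worse move at this temperature.
    p = math.exp(-delta_e / temp)
    print(f"T = {temp:6.1f} -> P(accept) = {p:.4f}")
# T = 100 gives ~0.9512, T = 10 gives ~0.6065, T = 1 gives ~0.0067.
```

Note how the same ΔE goes from "almost certainly accepted" to "almost certainly rejected" purely as T falls; this is the mechanism behind the explore-then-refine behavior described in the metallurgy analogy.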

def acceptance_probability(current_energy, neighbor_energy, temp):
    """
    Calculate probability of accepting a move.

    Always accept better solutions (return 1.0).
    Accept worse solutions with probability e^(-delta_e / temp).

    Parameters:
        current_energy (float): Objective value at current state
        neighbor_energy (float): Objective value at neighbor state
        temp (float): Current temperature (must be > 0)

    Returns:
        float: Acceptance probability in [0, 1]

    TODO: Implement using math.exp
    """
    pass

Function 3: Main SA Algorithm (10 points)

Combine neighbor generation, acceptance criterion, and cooling into a complete SA implementation.

def simulated_annealing(objective_function, initial_state,
                        initial_temp, cooling_rate, max_iterations):
    """
    Run simulated annealing to minimize objective_function.

    Algorithm:
    1. Initialize: current = initial_state, temp = initial_temp
    2. Repeat for max_iterations:
       a. Generate random neighbor
       b. Calculate acceptance probability
       c. Accept if random() < probability
       d. Update best state if improved
       e. Cool temperature: temp *= cooling_rate
       f. Enforce minimum temperature (floor at 0.01)
    3. Return best state found, its energy, and history

    Returns:
        tuple: (best_state, best_energy, energy_history)

    TODO: Implement the full algorithm
    """
    pass

Keep track of two states:

  • current: the state you are currently at (may be worse than the best found)

  • best: the best state found so far across all iterations

SA can temporarily move to worse states, so current and best can diverge. Always return best, not current.
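The two-variable bookkeeping can be sketched independently of the full algorithm. In this toy demo a random walk stands in for SA's accepted moves: `current` is free to get worse, while `best` is only ever replaced by an improvement.

```python
import random

random.seed(0)

def walk_with_best_tracking(steps=100):
    """Toy demo of current-vs-best bookkeeping (not the graded solver):
    'current' follows a random walk, 'best' records the lowest value seen."""
    current = 10.0
    best = current
    for _ in range(steps):
        current += random.uniform(-1.0, 1.0)  # may move uphill or downhill
        if current < best:                    # best only improves, never worsens
            best = current
    return current, best

current, best = walk_with_best_tracking()
assert best <= current  # best never lags behind the final current state
```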

Part 2: Cooling Schedules (15 points)

Implement three different temperature schedules. Each schedule returns the temperature at time step t.

Linear Cooling (5 points)

T(t) = T₀ × (1 − t / max_t)

Characteristics: Reaches zero temperature quickly; may terminate exploration too early on complex landscapes.

Exponential Cooling (5 points)

T(t) = T₀ × α^t

Characteristics: Most common in practice. With α = 0.95, temperature halves every ~14 steps. Choose α between 0.85 and 0.99 depending on how much exploration the problem requires.

Logarithmic Cooling (5 points)

T(t) = T₀ / log(1 + t)

Characteristics: Very slow; theoretically guarantees convergence to the global optimum as t → ∞. Impractical for most applications but useful as a benchmark. Note that log(1 + t) = 0 at t = 0, so start the schedule at t = 1 (or add a small offset to the denominator) to avoid division by zero.
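The three formulas can be compared side by side. The sketch below uses different signatures than the notebook's graded functions (these close over T₀ and max_t rather than taking them as parameters), and it also checks the claim that with α = 0.95 the temperature halves roughly every 14 steps:

```python
import math

T0, alpha, max_t = 100.0, 0.95, 200

def linear_cooling(t):       return T0 * (1 - t / max_t)
def exponential_cooling(t):  return T0 * alpha ** t
def logarithmic_cooling(t):  return T0 / math.log(1 + t)   # valid for t >= 1

for t in (1, 14, 50, 100):
    print(f"t={t:3d}  linear={linear_cooling(t):7.2f}  "
          f"exponential={exponential_cooling(t):7.2f}  "
          f"logarithmic={logarithmic_cooling(t):7.2f}")

# Halving time for exponential cooling: alpha^t = 0.5  ->  t = ln(0.5) / ln(alpha)
halving = math.log(0.5) / math.log(alpha)
print(f"exponential halving time: {halving:.1f} steps")  # ~13.5, i.e. ~14
```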

Test Functions

The notebook provides three test functions with known optima:

| Function | Type | Global Minimum | Location |
|----------|------|----------------|----------|
| Quadratic | Unimodal (one valley) | 0.0 | x = 3 |
| Rastrigin | Multi-modal (many valleys) | 0.0 | x = 0 |
| Schwefel | Highly multi-modal | ≈ 0.0 | x ≈ 420.97 |

Run all three cooling schedules on all three functions and record the results in the provided comparison table.
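The notebook supplies these functions, so use its definitions rather than retyping them. For reference only, common 1-D forms consistent with the table above (the notebook's exact definitions may differ) are:

```python
import math

def quadratic(x):
    """Unimodal: minimum 0.0 at x = 3 (assumed shifted parabola)."""
    return (x - 3) ** 2

def rastrigin(x):
    """Multi-modal: minimum 0.0 at x = 0 (standard 1-D Rastrigin)."""
    return 10 + x ** 2 - 10 * math.cos(2 * math.pi * x)

def schwefel(x):
    """Highly multi-modal: minimum ~0.0 near x ~ 420.97 (standard 1-D Schwefel)."""
    return 418.9829 - x * math.sin(math.sqrt(abs(x)))

print(quadratic(3), rastrigin(0), schwefel(420.9687))
```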

Part 3: SA vs. Hill Climbing (10 points)

Compare SA against standard hill climbing on all three test functions, focusing on the multi-modal Rastrigin and Schwefel functions (the unimodal quadratic serves as a baseline).

Comparison Protocol:

  1. Run hill climbing 20 times from random starting points

  2. Run simulated annealing 20 times with exponential cooling (T₀=100, α=0.95)

  3. Collect: mean final energy, standard deviation, best value found

  4. Create a side-by-side bar chart comparing results on both functions

  5. Write a 3–5 sentence analysis explaining the observed differences
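The statistics in step 3 reduce to a few NumPy calls. The placeholder arrays below are synthetic stand-ins for the 20 final energies from each solver; substitute your actual `hill_climbing` and `simulated_annealing` results.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder data standing in for 20 final energies per solver (replace with real runs).
hc_finals = rng.normal(loc=12.0, scale=3.0, size=20)
sa_finals = rng.normal(loc=2.0, scale=1.0, size=20)

for name, finals in (("hill climbing", hc_finals), ("simulated annealing", sa_finals)):
    print(f"{name}: mean={finals.mean():.2f}  "
          f"std={finals.std(ddof=1):.2f}  best={finals.min():.2f}")

# For step 4, matplotlib's grouped bars work well, e.g.:
#   plt.bar([0, 1], [hc_finals.mean(), sa_finals.mean()],
#           yerr=[hc_finals.std(ddof=1), sa_finals.std(ddof=1)],
#           tick_label=["Hill climbing", "Simulated annealing"])
```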

Expected Results Pattern:

  • Quadratic function: Hill climbing and SA perform similarly (unimodal — no local optima to get stuck in)

  • Rastrigin function: SA finds significantly better solutions (~80% improvement typical)

  • Schwefel function: SA shows dramatic improvement over hill climbing (~86% improvement typical)

Part 4: Parameter Analysis (5 points)

Choose one of the following investigations:

Option A: Temperature Analysis

Run SA on the Rastrigin function with initial temperatures T₀ ∈ {1, 10, 50, 100, 500, 1000}. Hold α = 0.95 and max_iterations = 5,000 constant. Plot final solution quality versus initial temperature. What is the optimal temperature range?

Option B: Cooling Rate Analysis

Run SA on the Rastrigin function with cooling rates α ∈ {0.80, 0.90, 0.95, 0.99, 0.999}. Hold T₀ = 100 and max_iterations = 5,000 constant. Plot final solution quality versus cooling rate. What is the trade-off between exploration time and solution quality?

Write a brief markdown cell (3–5 sentences) explaining your findings and recommending parameter settings for a practitioner.
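Either option reduces to the same sweep pattern. In the sketch below, the hypothetical `run_sa` is a stand-in for your `simulated_annealing` call (its name, signature, and returned values are illustrative assumptions, not the notebook's API); in practice, average several runs per setting to smooth out randomness.

```python
import math
import random

random.seed(1)

def run_sa(initial_temp):
    """Stand-in for your solver: returns a fabricated 'final energy' purely to
    illustrate the sweep bookkeeping, not real SA behavior."""
    return abs(math.log(initial_temp) - math.log(100)) + random.uniform(0, 0.1)

temps = [1, 10, 50, 100, 500, 1000]
results = [(T0, run_sa(T0)) for T0 in temps]  # one (T0, final_energy) pair per setting

for T0, energy in results:
    print(f"T0={T0:5d}  final energy={energy:.3f}")
# Plot with plt.plot(temps, [e for _, e in results], marker="o"), then label both axes.
```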

Grading Rubric

| Component | Points |
|-----------|--------|
| random_neighbor() — correct random perturbation | 5 |
| acceptance_probability() — correct Metropolis criterion | 5 |
| simulated_annealing() — complete and correct algorithm | 10 |
| linear_cooling() — correct formula | 5 |
| exponential_cooling() — correct formula | 5 |
| logarithmic_cooling() — correct formula | 5 |
| SA vs. hill climbing comparison with analysis | 10 |
| Parameter analysis (temperature OR cooling rate) with plot | 5 |
| **Total** | **50** |

Submission Instructions

Submitting Your Lab:

  1. Complete all TODO sections in the notebook

  2. Test your implementation on all three objective functions

  3. Ensure all plots render (run all cells before submitting)

  4. Write your analysis in the provided markdown cells

  5. Download: File → Download → Download .ipynb

  6. Submit the .ipynb file to Brightspace under Week 4: Simulated Annealing Lab

Optional: Review the complete solution with detailed explanations in the SA Lab Walkthrough. The walkthrough is available after the submission deadline.


Lab code adapted from aima-python, MIT License, Copyright 2016 aima-python contributors.

This work is licensed under CC BY-SA 4.0.