Unit 7: Probability and Uncertainty in AI — Overview
Until this point in the course, we have studied AI systems that operate in worlds where facts are either true or false. Search algorithms find definite paths; logic agents prove definite conclusions. But real-world intelligence is messier than that. A doctor examining a patient cannot be certain which disease is present. A spam filter cannot know for certain whether an email is unwanted. A self-driving car cannot know for certain where every pedestrian will step next. This unit equips you with the mathematical language AI uses to reason sensibly in the face of uncertainty.
Watch the Introduction to Probability video for a big-picture overview before diving in.
Learning Objectives
By the end of this unit, you will be able to:
- Explain why real-world AI applications require probabilistic reasoning rather than purely logical reasoning.
- Calculate basic probabilities, including conditional probability, and apply the definition P(A|B) = P(A ∧ B) / P(B).
- Identify the key differences between Boolean (logic-based) and probabilistic reasoning.
- Apply Bayes' theorem to update beliefs when new evidence is observed.
- Describe the structure and purpose of a Bayesian network (a directed acyclic graph with conditional probability tables).
- Explain the naive independence assumption and use it to classify text with a naive Bayes classifier.
- Implement a basic naive Bayes spam classifier in Python using the UCI SMS Spam Collection dataset.
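To preview the core calculation behind these objectives, here is a minimal sketch of Bayes' theorem combined with the naive independence assumption, applied to two words. Every probability below is a made-up illustrative value for this sketch, not a statistic from the UCI SMS Spam Collection dataset; the lab will estimate such values from real data.

```python
# Naive Bayes in miniature:
#   P(spam | w1, w2) ∝ P(w1 | spam) * P(w2 | spam) * P(spam)
# The "naive" assumption: words occur independently given the class.
# All probabilities here are illustrative assumptions, not real dataset values.

p_spam, p_ham = 0.4, 0.6                          # class priors
p_word_given_spam = {"free": 0.5, "meeting": 0.05}
p_word_given_ham = {"free": 0.05, "meeting": 0.3}

def posterior_spam(words):
    """Return P(spam | words) under the naive independence assumption."""
    score_spam, score_ham = p_spam, p_ham
    for w in words:
        score_spam *= p_word_given_spam[w]
        score_ham *= p_word_given_ham[w]
    # Normalize the two unnormalized scores into a probability.
    return score_spam / (score_spam + score_ham)

print(posterior_spam(["free"]))             # "free" raises belief above the 0.4 prior
print(posterior_spam(["free", "meeting"]))  # "meeting" pulls the belief back down
```

Note how each observed word multiplies in one likelihood per class; this is exactly the update the Friday lab scales up to a full vocabulary.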
Reading Assignments
Before each section, complete the corresponding reading:
- 7.1 — Why Uncertainty?
  - Berkeley CS 188 Textbook: Probability — Introduction section
- 7.2 — Probability Fundamentals
- 7.3 — From Logic to Probability
  - Berkeley CS 188 Textbook: Probability — Probabilistic Models section
- 7.4 — Bayesian Reasoning
  - Berkeley CS 188 Textbook: Bayes' Nets — Bayes' Theorem section
- 7.5 — Probabilistic Models
  - Berkeley CS 188 Textbook: Bayes' Nets — Bayesian Networks and Naive Bayes sections
Weekly Schedule
| Day | Activity | Resource |
|---|---|---|
| Mon–Tue | Read 7.1 and 7.2; watch Introduction to Probability video | Sections 7.1–7.2 |
| Wed | Read 7.3 and 7.4; work through Bayes' theorem examples | Sections 7.3–7.4 |
| Thu | Read 7.5; watch all three probabilistic models videos | Section 7.5 |
| Fri | Complete Spam Classifier Lab (Jupyter notebook) | Lab 7.L |
| Weekend | Wrap-up review and self-assessment quiz | Section 7.W |
Connection to Previous Units
In Unit 5 you learned propositional logic — a system where statements are either true or false. In Unit 6 you saw how knowledge-based agents use logical inference to act in their environment. This unit extends that foundation by asking: what happens when the agent cannot be certain whether a proposition is true? The answer is probability — the mathematics of partial belief. You will see "Logic vs. Probability" comparisons throughout this unit. When you encounter them, think back to the inference rules from Unit 6 and notice how probability generalizes them.
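One way to see how probability generalizes logical inference is to contrast a hard Boolean rule with its probabilistic counterpart. The sketch below uses a hypothetical trigger word and made-up likelihood values purely for illustration: the logical rule forces a True/False verdict, while conditioning on the same evidence merely moves a prior belief to a posterior.

```python
# Boolean reasoning (Units 5-6): a proposition is simply True or False.
# Hypothetical rule: "if the message contains 'winner', it IS spam."
def logical_classify(message: str) -> bool:
    return "winner" in message

# Probabilistic reasoning (Unit 7): the same evidence only shifts belief.
# Modus ponens generalizes to conditioning: from a prior P(spam)
# to a posterior P(spam | evidence). Likelihoods below are illustrative.
def probabilistic_classify(message: str, prior: float = 0.4) -> float:
    lik_spam, lik_ham = (0.6, 0.02) if "winner" in message else (0.4, 0.98)
    evidence = lik_spam * prior + lik_ham * (1 - prior)   # total probability
    return lik_spam * prior / evidence                    # Bayes' theorem

print(logical_classify("you are a winner"))        # True, with no gradation
print(probabilistic_classify("you are a winner"))  # a degree of belief in (0, 1)
```

The probabilistic version never "proves" spam; it reports how strongly the evidence supports it, which is exactly the generalization this unit develops.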
Why This Unit Matters
Probabilistic reasoning is the engine behind some of the most impactful AI systems deployed today. Every time Gmail routes a suspicious message to your spam folder, a Bayesian classifier is making a probabilistic judgment. Every time a physician decision-support system ranks diagnoses by likelihood, it is performing Bayesian inference. Every time a recommendation engine suggests a show you might enjoy, it is updating a probabilistic model of your preferences.
Understanding probability is not just a theoretical exercise — it is the prerequisite for understanding modern machine learning, which forms the topic of our final unit.
Where We Are Headed
Unit 7 teaches the language (probability) and the key theorem (Bayes'). Unit 8 will show how machines use that language to learn — adjusting their probabilistic models automatically from large amounts of data. Probability and machine learning are inseparable.
This work is licensed under CC BY-SA 4.0.