Unit 8 Overview: Machine Learning Foundations (Capstone)
Connection to Prior Units: This is the capstone unit of CSC 114. Over the past seven weeks you have built a complete toolkit for artificial intelligence: you formulated problems as search (Unit 3), optimized solutions under constraints (Unit 4), reasoned with formal logic (Unit 5), built knowledge-based inference systems (Unit 6), and handled uncertainty with probability (Unit 7). Machine learning draws on every one of those ideas. Before starting Unit 8, make sure you are comfortable with Bayesian reasoning (7.4) and the notion of probability as a model of uncertainty — ML is fundamentally probabilistic.
What This Unit Is About
Every AI system you have studied so far required a human expert to encode knowledge — writing search heuristics, specifying logic rules, or designing probability tables. Machine learning asks a different question: what if the system could discover those rules on its own by studying examples?
Tom Mitchell’s 1997 definition remains the standard: "A computer program is said to learn from experience E with respect to some task T and performance measure P, if its performance on T, as measured by P, improves with experience E." In plain language — the system gets better at its job the more data it sees. This week you will study the foundations of that idea: how machines learn from labeled examples (supervised learning), how we build and read decision trees, how we measure whether a model actually generalizes, and how machine learning fits into the broader landscape of modern AI.
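Mitchell's definition can be made concrete with a toy supervised learner. The sketch below is illustrative only (the dataset and the single-feature threshold rule are invented for this example): the task T is binary classification, the experience E is a set of labeled examples, and the performance P is training accuracy, which the learner improves by searching for a better threshold.

```python
# Toy illustration of Mitchell's E/T/P framing (example data is invented).
# Task T: classify an item as positive (1) or negative (0) from one feature.
# Experience E: labeled examples (feature value, label).
# Performance P: accuracy on the examples seen so far.

def learn_threshold(examples):
    """Pick the threshold that best separates the two classes."""
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in examples):
        preds = [1 if x >= t else 0 for x, _ in examples]
        acc = sum(p == y for p, (_, y) in zip(preds, examples)) / len(examples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Made-up numbers; the feature could be, say, a wine's alcohol content.
data = [(9.1, 0), (9.5, 0), (10.2, 0), (11.8, 1), (12.4, 1), (13.0, 1)]
threshold, accuracy = learn_threshold(data)
print(threshold, accuracy)  # 11.8 1.0
```

No human wrote the rule "predict 1 when the feature is at least 11.8"; the program discovered it from the examples, which is the shift this unit studies.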
Because this is the final unit of the course, the week also serves as a synthesis: you will see how search, optimization, logic, knowledge systems, and probability all contributed to making modern machine learning possible — and where the field is going next.
Learning Objectives
By the end of this unit, you will be able to:
- Define machine learning and distinguish it from traditional programming
- Explain the three paradigms of ML: supervised, unsupervised, and reinforcement learning
- Describe the supervised learning workflow from data collection through deployment
- Construct and interpret a simple decision tree, explaining information gain and entropy
- Summarize the k-Nearest Neighbors algorithm and when to prefer it over decision trees
- Evaluate a classifier using accuracy, precision, recall, and F1 score
- Explain overfitting and underfitting in terms of the bias-variance tradeoff
- Compare symbolic AI approaches (Units 3–7) with machine learning on specific problem types
- Identify current AI trends, including large language models and generative AI
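The decision-tree objective rests on two quantities worth previewing: entropy, H(S) = -Σ p·log2(p), which measures label impurity, and information gain, the drop in entropy achieved by a candidate split. A minimal sketch (the four-example dataset is invented for illustration):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent_labels, child_label_lists):
    """Entropy reduction achieved by splitting the parent into the children."""
    n = len(parent_labels)
    weighted = sum(len(ch) / n * entropy(ch) for ch in child_label_lists)
    return entropy(parent_labels) - weighted

# Hypothetical split of 4 "yes"/"no" examples into two pure branches:
parent = ["yes", "yes", "no", "no"]
print(entropy(parent))                                           # 1.0 bit
print(information_gain(parent, [["yes", "yes"], ["no", "no"]]))  # 1.0
```

A 50/50 label mix has the maximum entropy of 1 bit; a split that yields two pure branches recovers that full bit, which is why a tree-building algorithm such as ID3 chooses the attribute with the highest gain.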
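The k-Nearest Neighbors objective can likewise be previewed in a few lines: classify a query point by majority vote among the k closest training examples. The 2-D points and the choice k = 3 below are arbitrary, chosen only to make the clusters obvious.

```python
from collections import Counter
from math import dist  # Euclidean distance between two points (Python 3.8+)

def knn_predict(train, query, k=3):
    """Majority-vote label of the k training points nearest to query."""
    nearest = sorted(train, key=lambda pt: dist(pt[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Made-up 2-D data: class "a" clusters near the origin, "b" near (5, 5).
train = [((0, 0), "a"), ((1, 0), "a"), ((0, 1), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_predict(train, (0.5, 0.5)))  # a
print(knn_predict(train, (5.5, 5.5)))  # b
```

Note that kNN does no work at training time and all of its work at prediction time, the opposite of a decision tree; that tradeoff is one of the comparisons this unit develops.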
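The evaluation metrics named above all reduce to counts of true/false positives and negatives. A sketch with invented labels and predictions (1 = positive class, 0 = negative class):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for a binary classifier."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# Invented ground truth and predictions: one false negative, one false positive.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
print(acc, prec, rec, f1)  # 0.75 0.75 0.75 0.75
```

Precision asks "of the items I flagged positive, how many really were?"; recall asks "of the truly positive items, how many did I flag?"; F1 is their harmonic mean. The lab uses scikit-learn's built-in versions of these metrics.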
Reading Assignments
- UC Berkeley CS 188 Online Textbook: Machine Learning (primary reading for Sections 8.1–8.3)
- scikit-learn User Guide, BSD License (reference for the lab algorithms)
- Montreal AI Ethics Institute: State of AI Ethics (background for Section 8.5)
Weekly Schedule
| Day | Activity | Resource |
|---|---|---|
| Mon–Tue | Sections 8.1 and 8.2: What is ML? + Supervised Learning | Berkeley CS 188 ML chapter |
| Wed | Section 8.3: Evaluating Models | scikit-learn docs |
| Thu | Sections 8.4 and 8.5: Connecting It All + Current Landscape | MAIEI State of AI Ethics |
| Fri–Sun | Lab: Machine Learning Classifier (wine quality dataset) | |
This work is licensed under CC BY-SA 4.0.