Unit 2 Overview: Intelligent Agents

Last week you explored what AI is, traced its history, and examined its ethical implications. This week, you encounter the fundamental design pattern that underlies every AI system you will study in this course: the intelligent agent.

Connection to Unit 1: In Unit 1 you learned that AI systems reason and act in the world. Unit 2 gives you the precise vocabulary to describe how they do that — through sensors, actuators, and agent functions. By the end of this unit, you will be able to analyze any AI system you encounter using a single, powerful framework.

What You Will Learn This Week

By the end of Unit 2, you should be able to:

  1. Define what an intelligent agent is and identify its key components (percepts, actions, sensors, actuators, agent function)

  2. Apply the PEAS framework to describe any agent design problem

  3. Categorize environments by their properties (observable, deterministic, episodic, static, discrete, single/multi-agent)

  4. Distinguish between different agent architectures (simple reflex, model-based, goal-based, utility-based) and understand when each is appropriate

  5. Explain how learning agents improve their performance over time

  6. Implement a basic intelligent agent in Python using a grid-world simulation

Reading Assignments

Primary Reading:

UC Berkeley CS 188 Online Textbook — Agents and Environments

Covers agent components, PEAS, environment classification, and agent architectures. Licensed CC BY-SA 4.0.

Poole & Mackworth: Artificial Intelligence

Alternative treatment of agents and rationality from another open textbook. Licensed CC BY-SA 4.0.

Why the Agent Perspective Matters

Whether it is a self-driving car navigating traffic, a chess program choosing moves, or a chatbot answering questions, every AI system can be understood as an agent that perceives its environment and takes actions to achieve goals.

The agent perspective provides a powerful analytical lens:

  • Self-driving cars are agents that perceive roads (sensors: cameras, LIDAR) and take actions (actuators: steering, braking)

  • Recommendation systems are agents that perceive your viewing history and take actions (suggesting content)

  • Spam filters are agents that perceive email content and take actions (routing to inbox or spam folder)

  • Game-playing AIs are agents that perceive game states and select moves

Understanding this framework lets you design AI systems systematically, choose the right architecture for a given problem, and evaluate whether a system is behaving rationally given its goals.

Every AI system in this course — from simple search algorithms to learning neural networks — can be analyzed as an agent operating in an environment. The agent framework is your universal lens for understanding AI.
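At its core, an agent is just a mapping from percepts to actions. As a minimal sketch (the two-square vacuum world and the names `vacuum_agent`, "Suck", "Left", and "Right" are illustrative, not taken from the readings), a simple reflex agent can be written as a single Python function:

```python
def vacuum_agent(percept):
    """Simple reflex agent: maps the current percept directly to an action.

    percept is a (location, status) pair, e.g. ("A", "Dirty").
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"   # clean the current square
    elif location == "A":
        return "Right"  # move toward the other square
    else:
        return "Left"

# The agent function in action: percepts in, actions out.
print(vacuum_agent(("A", "Dirty")))  # Suck
print(vacuum_agent(("A", "Clean")))  # Right
print(vacuum_agent(("B", "Clean")))  # Left
```

Notice that this agent consults only the current percept, never any history — exactly the limitation that motivates the model-based and goal-based architectures you will meet in Section 2.4.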

Weekly Schedule

  2.1      Agents and Environments — percepts, sensors, actuators, agent function (30 min)
  2.2      The PEAS Framework — specifying agent design problems (30 min)
  2.3      Classifying Environments — six dimensions of environment complexity (35 min)
  2.4      Agent Architectures — from simple reflex to utility-based (40 min)
  2.5      Learning Agents — how agents improve through experience (30 min)
  Lab      Agent Programming Lab — build agents in a grid-world simulation (2–3 hrs)
  Wrap-Up  Self-assessment, glossary, connection to Unit 3 (20 min)

Before diving in, think about your smartphone. In what ways does it act as an intelligent agent? What does it perceive? What actions does it take? What are its goals?

Keep this example in mind as you read through the sections — you will revisit it in the PEAS framework activity.

Key Concept Preview

This unit introduces two tools you will use throughout the course:

PEAS framework (Performance, Environment, Actuators, Sensors)

A checklist that ensures you have thought through all essential aspects of agent design before writing a line of code.
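A PEAS description is just a structured record with four fields. As one possible sketch, here is a self-driving taxi written down this way (the specific field values are one plausible specification, not a definitive one):

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS description: Performance, Environment, Actuators, Sensors."""
    performance: list  # how success is measured
    environment: list  # what the agent operates in
    actuators: list    # how the agent acts on the world
    sensors: list      # how the agent perceives the world

taxi = PEAS(
    performance=["safety", "speed", "legality", "passenger comfort"],
    environment=["roads", "traffic", "pedestrians", "weather"],
    actuators=["steering", "accelerator", "brake", "horn"],
    sensors=["cameras", "LIDAR", "GPS", "speedometer"],
)
print(taxi.sensors)  # ['cameras', 'LIDAR', 'GPS', 'speedometer']
```

Filling in all four fields before coding forces you to confront questions like "how will I actually measure performance?" — which is the whole point of the checklist.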

Environment taxonomy

Six binary dimensions — observable, deterministic, episodic, static, discrete, single-agent — that determine which agent architecture is most appropriate. A chess-playing agent faces a fundamentally different challenge than a self-driving car, and the taxonomy explains exactly why.
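One way to make that contrast concrete is to record each environment as six yes/no properties. The classifications below follow one common textbook analysis (treating untimed chess as static); the dictionary layout and the `easy_score` helper are illustrative:

```python
# Each environment is classified along the six binary dimensions,
# with True meaning the "easier" end of the dimension.
ENVIRONMENTS = {
    "chess": {
        "fully_observable": True,   # both players see the whole board
        "deterministic": True,      # moves have predictable outcomes
        "episodic": False,          # sequential: early moves shape later ones
        "static": True,             # the board doesn't change while you think
        "discrete": True,           # finite squares, finite legal moves
        "single_agent": False,      # there is an opponent
    },
    "self_driving_car": {
        "fully_observable": False,  # occlusions, limited sensor range
        "deterministic": False,     # other drivers act unpredictably
        "episodic": False,          # actions have long-term consequences
        "static": False,            # traffic changes while the car deliberates
        "discrete": False,          # continuous positions, speeds, angles
        "single_agent": False,      # many other road users
    },
}

def easy_score(props):
    """Count how many 'easy' properties an environment has."""
    return sum(props.values())

print(easy_score(ENVIRONMENTS["chess"]))             # 4
print(easy_score(ENVIRONMENTS["self_driving_car"]))  # 0
```

Chess scores high on the easy ends of the dimensions; driving scores at the hard end of every one, which is exactly why it demands a far more sophisticated agent architecture.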


Based on the UC Berkeley CS 188 Online Textbook by Nikhil Sharma, Josh Hug, Jacky Liang, and Henry Zhu, licensed under CC BY-SA 4.0.

This work is licensed under CC BY-SA 4.0.