Unit 6: Knowledge-Based Agents and Inference — Wrap-Up and Self-Assessment
Knowledge-based agents separate what an agent knows from how it reasons. A knowledge base stores domain facts and if-then rules. An inference engine — operating entirely independently of the domain — applies inference rules (principally Modus Ponens) to derive new facts. Forward chaining expands the KB until saturated; backward chaining focuses on proving a specific goal. Horn clauses make both strategies run in polynomial time, and the STRIPS assumption tames the combinatorial explosion of the frame problem. Expert systems bring all of this together in a practical architecture used in medicine, engineering, finance, and beyond.
Key Takeaways
6.1 Knowledge-Based Agents
- A knowledge-based agent maintains a knowledge base of facts and rules and an inference engine for reasoning.
- The TELL operation adds new sentences to the KB; the ASK operation queries what can be derived.
- ASK goes beyond simple lookup — it uses inference to derive facts not explicitly stored.
- Declarative knowledge (stating what is true) is more flexible than procedural code (stating how to compute).
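The TELL/ASK interface can be sketched as a tiny Python class. The names `KnowledgeBase`, `tell`, and `ask` below are illustrative, not from any particular library; the point is that `ask` runs inference rather than a lookup:

```python
# Minimal knowledge-based agent sketch (illustrative, not a library API).
class KnowledgeBase:
    def __init__(self):
        self.facts = set()   # known atomic facts, e.g. "rain"
        self.rules = []      # (premises, conclusion) pairs

    def tell(self, sentence):
        """Add a fact ("rain") or a rule ((("rain",), "wet")) to the KB."""
        if isinstance(sentence, tuple):
            self.rules.append(sentence)
        else:
            self.facts.add(sentence)

    def ask(self, query):
        """True if query is derivable, not merely stored (simple forward chaining)."""
        derived = set(self.facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if conclusion not in derived and all(p in derived for p in premises):
                    derived.add(conclusion)
                    changed = True
        return query in derived

kb = KnowledgeBase()
kb.tell("rain")
kb.tell((("rain",), "wet"))
kb.tell((("wet",), "slippery"))
print(kb.ask("slippery"))  # True, even though "slippery" was never told directly
```

Note that `ask("slippery")` succeeds only because of inference: the KB stores the fact "rain" and two rules, never the queried fact itself.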
6.2 Inference Rules
- Modus Ponens: from P → Q and P, derive Q. The foundational rule of forward/backward chaining.
- And-Elimination: from P ∧ Q, derive P and Q separately.
- Resolution: from P ∨ Q and ¬P ∨ R, derive Q ∨ R. Powers automated theorem provers and Prolog.
- Avoid the classic fallacies: affirming the consequent (Q, P → Q, therefore P) and denying the antecedent (¬P, P → Q, therefore ¬Q).
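These rules are easy to mechanize. A minimal sketch, representing clauses as sets of literal strings with `~` marking negation (the function names here are illustrative):

```python
# Modus Ponens: from P -> Q and P, derive Q.
# An implication is a (premise, conclusion) pair of atom strings.
def modus_ponens(implication, fact):
    premise, conclusion = implication
    return conclusion if fact == premise else None

def negate(lit):
    """~P <-> P"""
    return lit[1:] if lit.startswith("~") else "~" + lit

# Resolution: cancel one complementary pair across two clauses.
def resolve(c1, c2):
    resolvents = []
    for lit in c1:
        if negate(lit) in c2:
            resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return resolvents

print(modus_ponens(("P", "Q"), "P"))                                # Q
print(sorted(resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"}))[0]))  # ['Q', 'R']
```

The second print is exactly the Resolution example from the bullet above: P ∨ Q and ¬P ∨ R resolve to Q ∨ R.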
6.3 Forward Chaining
- Start with known facts; repeatedly fire rules whose conditions are satisfied; stop when no new facts appear.
- Data-driven: the data determines which conclusions are drawn.
- Best for reactive systems, monitoring, and situations where many conclusions are needed.
- Sound and complete for Horn clause KBs.
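The fire-until-saturated loop is usually implemented with premise counters, which is what makes propositional forward chaining run in linear time. A sketch under the assumption that rules are (premises, conclusion) pairs over atomic strings:

```python
from collections import deque

def forward_chain(facts, rules):
    """Agenda-based forward chaining over definite clauses.

    Each rule tracks how many of its premises are still unproven;
    a rule fires when that count reaches zero. Returns the saturated KB.
    """
    count = [len(premises) for premises, _ in rules]
    inferred = set(facts)
    agenda = deque(inferred)
    while agenda:
        fact = agenda.popleft()
        for i, (premises, conclusion) in enumerate(rules):
            if fact in premises:
                count[i] -= 1
                if count[i] == 0 and conclusion not in inferred:
                    inferred.add(conclusion)       # new fact derived
                    agenda.append(conclusion)      # may trigger further rules
    return inferred

facts = {"a", "b"}
rules = [(("a", "b"), "c"), (("c",), "d")]
print(sorted(forward_chain(facts, rules)))  # ['a', 'b', 'c', 'd']
```

Each fact enters the agenda at most once and each premise is decremented at most once, so the work is linear in the size of the KB.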
6.4 Backward Chaining
- Start with a goal; find rules that could prove it; set their conditions as subgoals; recurse.
- Goal-driven: only explores paths relevant to the specific query.
- Best for diagnosis, query-answering, and targeted reasoning.
- Produces a proof tree that serves as an explanation.
- Prolog uses backward chaining with depth-first search and backtracking.
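A recursive sketch of this procedure, with a visited set to guard against circular rules. The medical rule names are hypothetical examples, not from any real system:

```python
def backward_chain(goal, facts, rules, visited=frozenset()):
    """Goal-driven proof search over definite clauses.

    Returns True if the goal is a known fact or some rule concluding the
    goal has all its premises provable as subgoals.
    """
    if goal in facts:
        return True
    if goal in visited:            # avoid infinite regress on cyclic rules
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(p, facts, rules, visited | {goal}) for p in premises
        ):
            return True            # every subgoal of this rule was proved
    return False

rules = [(("fever", "rash"), "measles"), (("measles",), "contagious")]
facts = {"fever", "rash"}
print(backward_chain("contagious", facts, rules))  # True
```

Only rules whose conclusion matches the current goal are ever examined, which is exactly the "goal-driven" focus described above.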
6.5 Horn Clauses
- A Horn clause has at most one positive literal.
- Three types: definite clauses (rules: A ∧ B → C), facts (P), and goal clauses (¬A ∨ ¬B).
- Horn clause inference runs in linear time, whereas inference in general propositional logic is intractable in the worst case (satisfiability is NP-complete).
- Prolog, CLIPS, OPS5, and Datalog are all built on Horn clauses.
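Counting positive literals is all it takes to classify a clause. A sketch, again writing clauses as sets of literal strings with `~` for negation:

```python
def classify(clause):
    """Classify a clause (set of literal strings) by its positive-literal count."""
    positives = [lit for lit in clause if not lit.startswith("~")]
    if len(positives) > 1:
        return "not Horn"                      # more than one positive literal
    if len(positives) == 0:
        return "goal clause"                   # all literals negative
    if len(clause) == 1:
        return "fact"                          # single positive literal
    return "definite clause (rule)"            # one positive, some negative

print(classify({"~A", "~B", "C"}))  # definite clause (rule): A AND B -> C
print(classify({"P"}))              # fact
print(classify({"~A", "~B"}))       # goal clause
print(classify({"P", "Q"}))         # not Horn
```

The first example shows the implication-form correspondence from the bullets: ¬A ∨ ¬B ∨ C is the same clause as A ∧ B → C.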
6.6 The Frame Problem
- The challenge: in formal logic, you must state everything that remains true after an action, which is combinatorially infeasible.
- STRIPS assumption: everything stays the same unless explicitly listed in the action’s add or delete lists.
- Successor-state axioms: explicitly characterize when each fluent is true in the next state.
- Situation calculus: a full first-order logic framework with situations, actions, and fluents.
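Under the STRIPS assumption, state update reduces to simple set arithmetic. A sketch in which an action is a (preconditions, add list, delete list) triple; the door example is hypothetical:

```python
def apply_action(state, action):
    """STRIPS-style update: remove the delete list, add the add list.

    Every other fluent persists by assumption -- no frame axioms needed.
    """
    preconditions, add_list, delete_list = action
    if not preconditions <= state:
        return None                      # preconditions unmet: not applicable
    return (state - delete_list) | add_list

state = {"at_door", "door_closed"}
open_door = ({"door_closed"},            # preconditions
             {"door_open"},              # add list
             {"door_closed"})            # delete list
print(sorted(apply_action(state, open_door)))  # ['at_door', 'door_open']
```

Notice that "at_door" survives the action without ever being mentioned, which is exactly the frame problem being sidestepped.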
6.7 Expert Systems
- An expert system has four parts: knowledge base, inference engine, explanation facility, and user interface.
- Knowledge engineering is a five-phase process: identify the problem, find experts, extract knowledge, encode rules, test and validate.
- Confidence factors handle uncertainty without a full probabilistic model.
- The explanation facility — tracing the reasoning chain — is what makes expert systems auditable and trustworthy.
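One common convention combines a rule's confidence factor with its weakest premise, recording each firing for the explanation facility. A toy sketch with hypothetical rules (real systems in the MYCIN tradition use more elaborate combination schemes):

```python
def fire(rule, evidence, trace):
    """Fire a rule if all premises hold; record the step in the trace.

    Chained confidence = rule's factor * the weakest premise's confidence
    (one common convention, not the only one).
    """
    premises, conclusion, cf = rule
    if all(p in evidence for p in premises):
        support = cf * min(evidence[p] for p in premises)
        evidence[conclusion] = support
        trace.append(f"{' AND '.join(premises)} => {conclusion} (cf={support:.2f})")
        return True
    return False

evidence = {"fever": 0.9, "cough": 0.8}               # observed, with confidence
rules = [(("fever", "cough"), "flu", 0.7)]            # hypothetical rule
trace = []
for rule in rules:
    fire(rule, evidence, trace)
print(evidence["flu"])   # about 0.56 = 0.7 * min(0.9, 0.8)
print(trace[0])          # the reasoning step, ready for an explanation display
```

The `trace` list is a minimal explanation facility: each entry shows which premises supported which conclusion and how strongly.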
Concept Map
Logic (Unit 5)
      │
      ▼
Knowledge Base ──── TELL/ASK Interface ──── Inference Engine
      │                                            │
      │                                ┌───────────┴───────────┐
      │                          Modus Ponens             Resolution
      │                                │
      │                    ┌───────────┴───────────┐
      │             Forward Chaining       Backward Chaining
      │              (data-driven)           (goal-driven)
      │                    │
      │              Horn Clauses ──── Efficient (linear time)
      │                    │
      │              Expert System ──── Explanation Facility
      │
      ▼
Frame Problem ──── STRIPS Assumption / Successor-State Axioms
Summary Table
| Concept | Key Idea | Where Used |
|---|---|---|
| Knowledge Base | Stores sentences (facts + rules) in a formal language | All knowledge-based agents |
| TELL / ASK | Add facts; query what can be derived | Every KB interaction |
| Modus Ponens | From P → Q and P, derive Q | Forward and backward chaining |
| Resolution | Cancel complementary literals across two clauses | Theorem provers, Prolog |
| Forward Chaining | Facts → Rules → New Facts (repeat) | CLIPS, production systems, monitoring |
| Backward Chaining | Goal → Rules → Subgoals (recurse) | Prolog, diagnostic systems |
| Horn Clause | At most one positive literal; enables linear-time inference | Prolog, CLIPS, Datalog |
| Frame Problem | What stays the same when actions occur? | Planning, robotics |
| STRIPS Assumption | Everything unchanged unless in add/delete list | Classical planning, game AI |
| Expert System | KB + Inference Engine + Explanation + UI | Medicine, law, finance, tech support |
Glossary
- Knowledge Base (KB): A set of sentences in a formal language representing facts and rules about the agent’s world.
- Tell: The operation that adds a new sentence (fact or rule) to the knowledge base.
- Ask: The operation that queries whether a sentence is entailed by the knowledge base, triggering inference.
- Modus Ponens: From P → Q and P, derive Q; the foundation of both chaining strategies.
- Resolution: From P ∨ Q and ¬P ∨ R, derive Q ∨ R; the basis of automated theorem proving.
- Forward Chaining: Data-driven inference that starts with known facts and applies rules until no new facts can be derived.
- Backward Chaining: Goal-driven inference that starts with a query and works backward to find supporting facts.
- Knowledge Base Saturation: The state after forward chaining when no more rules can fire; the KB contains all entailed facts.
- Subgoal: A condition that must be proven to establish a higher-level goal in backward chaining.
- Horn Clause: A disjunction of literals with at most one positive literal; enables efficient linear-time inference.
- Definite Clause: A Horn clause with exactly one positive literal; written as conditions → conclusion in implication form.
- Frame Problem: The challenge of representing which world properties remain unchanged after an action.
- Fluent: A property of the world that can change over time as a result of actions.
- STRIPS Assumption: Convention that everything stays the same after an action unless explicitly listed as changed.
- Successor-State Axiom: A logical sentence characterizing exactly when a fluent is true after an action.
- Situation Calculus: A first-order logic framework for reasoning about actions using situations, actions, and fluents.
- Expert System: A knowledge-based AI system that replicates specialist decision-making in a specific domain.
- Knowledge Engineering: The process of eliciting expertise from domain specialists and encoding it as logical rules.
- Explanation Facility: The component of an expert system that traces and displays the reasoning chain behind a conclusion.
- Confidence Factor: A numerical weight (0–1) expressing the strength of support a rule provides for its conclusion.
Preview of Unit 7: Probability and Uncertainty
Knowledge-based agents give us certainty: if the rules are right and the facts are true, the conclusions are provably correct. But the real world is rarely that clean. Sensors are noisy. Rules have exceptions. The same symptoms can have multiple causes with different probabilities.
Unit 7 introduces probabilistic reasoning — the tools AI uses when certainty is not available. You will see how Bayes' theorem, conditional probability, and Bayesian networks let agents make rational decisions even when knowledge is incomplete. Think of it as upgrading from a yes/no answer to a "70% likely" answer — and learning when that is actually the right approach.
Based on the UC Berkeley CS 188 Online Textbook by Nikhil Sharma, Josh Hug, Jacky Liang, and Henry Zhu, licensed under CC BY-SA 4.0.
This work is licensed under CC BY-SA 4.0.