Expert System Lab

Unit 6: Knowledge-Based Agents and Inference — Lab

In this lab you will move from theory to practice by building a working expert system from scratch in Python. You will implement the tell-ask interface, a forward-chaining inference engine, a domain-specific knowledge base, and an explanation facility that traces the reasoning chain step by step.

Before starting, make sure you are comfortable with:

  • The TELL and ASK interface (Section 6.1)

  • Modus Ponens as the basis of forward chaining (Section 6.2)

  • The forward chaining algorithm — scan rules, fire those whose conditions are met, repeat until no new facts (Section 6.3)

  • Horn clauses and if-then rules (Section 6.5)

If any of these topics feel unclear, review the relevant section before opening the notebook.

Lab Objectives

By the end of this lab you will have:

  1. Implemented a Rule class storing conditions and conclusion.

  2. Built a KnowledgeBase class with working tell(), ask(), and add_rule() methods.

  3. Implemented a forward_chain() method that iteratively fires rules until no new facts can be derived.

  4. Built a domain-specific knowledge base with at least 15 rules covering a diagnosis or recommendation scenario of your choosing.

  5. Implemented an explain(conclusion) method that prints the complete reasoning chain from initial facts to conclusion.

Getting Started

Setup Instructions

  1. Download the starter notebook (see DOWNLOAD callout below).

  2. Go to colab.research.google.com.

  3. Click File → Upload notebook and select Expert_System_Lab_Starter.ipynb.

  4. Immediately save a copy to your Drive: File → Save a copy in Drive.

  5. Work through each Part in order, running each cell as you complete it.

Starter Notebook

Expert_System_Lab_Starter.ipynb — available in Brightspace → Assignments → Week 6 Lab.

Open in Google Colab: upload the file via File → Upload notebook.

Lab Structure (100 Points)

Part 1: Knowledge Base Implementation (25 points)

Build the data structures that store facts and rules.

Requirements:

  • Create a Rule class with conditions (list) and conclusion (string).

  • Create a KnowledgeBase class that stores facts in a set and rules in a list.

  • Implement tell(fact) — adds a fact to the KB.

  • Implement add_rule(rule) — adds a Rule object to the KB.

  • Implement ask(query) — returns True if the query is in the facts set, False otherwise.
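The requirements above can be sketched as follows. The class and method names come from the requirements; the docstrings and comments are illustrative, and your implementation may differ in the details:

```python
# Minimal sketch of the Part 1 data structures.

class Rule:
    """An if-then rule: if all conditions are known facts, conclude `conclusion`."""
    def __init__(self, conditions, conclusion):
        self.conditions = conditions  # list of fact strings
        self.conclusion = conclusion  # single fact string

class KnowledgeBase:
    def __init__(self):
        self.facts = set()     # a set gives O(1) lookup for ask()
        self.rules = []
        self.fired_rules = []  # filled in by forward_chain() in Part 2

    def tell(self, fact):
        self.facts.add(fact)   # sets ignore duplicates automatically

    def add_rule(self, rule):
        self.rules.append(rule)

    def ask(self, query):
        return query in self.facts
```

A quick sanity check: after kb.tell("fever"), kb.ask("fever") should return True and kb.ask("cough") should return False.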

Grading Criteria — Part 1

  • 5 pts: Rule class correctly stores conditions and conclusion

  • 10 pts: KnowledgeBase stores facts and rules; facts use a set for O(1) lookup

  • 5 pts: tell() adds facts correctly and does not duplicate

  • 5 pts: ask() queries facts correctly

Part 2: Forward Chaining Algorithm (30 points)

Implement the inference engine that derives new conclusions.

Requirements:

  • Implement forward_chain() method on KnowledgeBase.

  • The method scans all rules to find which can fire (all conditions in current facts).

  • Rules that can fire add their conclusion to the KB as a new fact.

  • The method continues iterating until one full pass produces no new facts.

  • Track fired rules in self.fired_rules list (needed for Part 4).

  • Return the set of all newly inferred facts.

Two critical correctness checks:

  1. Only fire a rule if its conclusion is NOT already in the KB (prevents infinite loops).

  2. Stop the outer loop when new_facts is empty after a full pass (termination condition).

Missing either check will cause your system to loop forever.
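The loop described above, with both correctness checks marked, can be sketched like this. A compact stub of the Part 1 class is inlined so the cell runs on its own; in your notebook, forward_chain() is a method on your own KnowledgeBase:

```python
class Rule:
    def __init__(self, conditions, conclusion):
        self.conditions, self.conclusion = conditions, conclusion

class KnowledgeBase:
    def __init__(self):
        self.facts = set()
        self.rules = []
        self.fired_rules = []

    def forward_chain(self):
        """Fire rules until a full pass derives nothing new; return all new facts."""
        inferred = set()
        while True:
            new_facts = set()
            for rule in self.rules:
                # Check 1: never re-fire a rule whose conclusion is known
                if rule.conclusion in self.facts or rule.conclusion in new_facts:
                    continue
                # Fire when every condition is already a fact
                if all(c in self.facts for c in rule.conditions):
                    new_facts.add(rule.conclusion)
                    self.fired_rules.append(rule)
            # Check 2: a full pass with no new facts means we are done
            if not new_facts:
                return inferred
            self.facts |= new_facts  # update facts only between passes
            inferred |= new_facts
```

Note that new facts are collected in a separate new_facts set and merged into self.facts only after the scan, which avoids mutating the fact set while iterating over rules.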

Grading Criteria — Part 2

  • 10 pts: Correctly identifies which rules can fire

  • 10 pts: Fires rules and adds new facts to KB

  • 5 pts: Correctly detects when no new facts can be derived and stops

  • 5 pts: Returns all inferred conclusions; tracks fired rules for explanation

Part 3: Domain-Specific Expert System (30 points)

Build a real knowledge base for a domain of your choosing.

Choose one domain:

  • Medical Diagnosis: diagnose common illnesses (flu, cold, COVID, allergies) from symptoms

  • Tech Support: diagnose computer/network problems from observable symptoms

  • Car Diagnosis: identify car problems (won’t start, strange noise, warning lights) from symptoms

  • Your Choice: any domain with clear observation → diagnosis patterns (get instructor approval first)

Requirements:

  • Minimum 15 rules in your knowledge base

  • Rules must chain: some conclusions feed into other rules as conditions

  • At least 5 distinct diagnoses or recommendations possible

  • Test your system with at least 3 different input scenarios

Minimum structure for your KB (medical example):

# Stage 1: Symptom grouping rules
R1: fever ∧ cough → respiratory_issue
R2: runny_nose ∧ itchy_eyes → allergy_symptoms

# Stage 2: Diagnosis rules (use intermediate conclusions from Stage 1)
R3: respiratory_issue ∧ fatigue → possible_flu
R4: possible_flu ∧ body_aches → flu
R5: respiratory_issue ∧ sore_throat → possible_cold
R6: possible_cold ∧ runny_nose → common_cold

# Stage 3: Recommendation rules (chain off diagnoses)
R7: flu → recommend_rest
R8: flu → recommend_fluids
R9: flu → consider_antiviral
R10: common_cold → recommend_rest
...

Notice the chaining: Stage 2 rules use conclusions from Stage 1, and Stage 3 rules use conclusions from Stage 2.
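One way to encode a subset of these example rules in code is shown below (a compact stub of the Part 1 and 2 classes is inlined so the cell runs on its own; in your notebook, reuse your own Rule and KnowledgeBase, and remember your KB needs at least 15 rules):

```python
class Rule:
    def __init__(self, conditions, conclusion):
        self.conditions, self.conclusion = conditions, conclusion

class KnowledgeBase:
    def __init__(self):
        self.facts, self.rules = set(), []
    def tell(self, fact): self.facts.add(fact)
    def add_rule(self, rule): self.rules.append(rule)
    def ask(self, query): return query in self.facts
    def forward_chain(self):  # simplified: no fired-rule tracking here
        while True:
            new = {r.conclusion for r in self.rules
                   if r.conclusion not in self.facts
                   and all(c in self.facts for c in r.conditions)}
            if not new:
                return
            self.facts |= new

kb = KnowledgeBase()
# Stage 1: symptom grouping
kb.add_rule(Rule(["fever", "cough"], "respiratory_issue"))
# Stage 2: diagnosis (chains off a Stage 1 conclusion)
kb.add_rule(Rule(["respiratory_issue", "fatigue"], "possible_flu"))
kb.add_rule(Rule(["possible_flu", "body_aches"], "flu"))
# Stage 3: recommendation (chains off a Stage 2 conclusion)
kb.add_rule(Rule(["flu"], "recommend_rest"))

# Test scenario: assert the observed symptoms, then run inference
for symptom in ["fever", "cough", "fatigue", "body_aches"]:
    kb.tell(symptom)
kb.forward_chain()
print(kb.ask("flu"), kb.ask("recommend_rest"))  # prints: True True
```

Running a scenario like this is also a convenient template for the three required test cases: change the symptoms you tell() and check which diagnoses come back.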

Grading Criteria — Part 3

  • 10 pts: KB has 15+ well-designed rules with clear predicate names

  • 10 pts: Rules chain correctly to reach multi-step conclusions

  • 5 pts: System produces at least 5 distinct diagnoses/recommendations

  • 5 pts: At least 3 test cases demonstrate system functionality

Part 4: Explanation Facility (15 points)

Add the ability to explain how the system reached a conclusion.

Requirements:

  • Implement explain(conclusion) method on KnowledgeBase.

  • If the conclusion is not in the KB, display an appropriate "cannot explain" message.

  • If the conclusion was an initial fact, display "given as initial fact."

  • If the conclusion was derived, find the rule that concluded it, then recursively find the rules that established each of its conditions.

  • Display the complete reasoning chain in human-readable format.

Expected output for kb.explain("flu"):

=== Explanation for: flu ===

Step 1: Given facts: fever=True, cough=True
Step 2: Fired rule: fever ∧ cough → respiratory_issue
Step 3: Inferred: respiratory_issue=True
Step 4: Fired rule: respiratory_issue ∧ fatigue → possible_flu
Step 5: Inferred: possible_flu=True
Step 6: Fired rule: possible_flu ∧ body_aches → flu
Step 7: Concluded: flu=True
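One possible shape for explain() is a recursive walk over self.fired_rules: a fact concluded by no fired rule is treated as initial. The sketch below prints an indented chain rather than the numbered steps shown above, and returns the lines as a list as well (a design choice that makes it easy to test); adapt the printing to the numbered format if you prefer. Minimal stubs are inlined so the cell runs on its own:

```python
class Rule:
    def __init__(self, conditions, conclusion):
        self.conditions, self.conclusion = conditions, conclusion

class KnowledgeBase:
    def __init__(self):
        self.facts, self.fired_rules = set(), []

    def explain(self, conclusion):
        """Print (and return) the reasoning chain leading to `conclusion`."""
        lines = []
        self._explain(conclusion, 0, lines)
        print("\n".join(lines))
        return lines

    def _explain(self, fact, depth, lines):
        pad = "  " * depth
        if fact not in self.facts:
            lines.append(f"{pad}{fact}: cannot explain (not in the KB)")
            return
        # The rule that concluded this fact, if any was fired
        rule = next((r for r in self.fired_rules if r.conclusion == fact), None)
        if rule is None:
            lines.append(f"{pad}{fact}: given as initial fact")
            return
        lines.append(f"{pad}{fact}: from rule {' ∧ '.join(rule.conditions)} → {fact}")
        for condition in rule.conditions:  # recurse into each premise
            self._explain(condition, depth + 1, lines)
```

With facts {fever, cough, respiratory_issue} and one fired rule fever ∧ cough → respiratory_issue, explain("respiratory_issue") prints the rule line followed by two indented "given as initial fact" lines.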

Grading Criteria — Part 4

  • 5 pts: Correctly identifies and tracks fired rules

  • 5 pts: Builds complete reasoning chain back to initial facts

  • 5 pts: Explanation output is clear and human-readable

Submission Requirements

Before Submitting

  1. Click Runtime → Restart and run all to verify every cell runs without errors.

  2. Confirm all three test scenarios produce output.

  3. Verify the explanation facility produces complete reasoning chains.

  4. Check that your KB has at least 15 rules.

  5. Rename the notebook: Expert_System_Lab_YourLastName.ipynb

What to submit:

  • Expert_System_Lab_YourLastName.ipynb — your completed notebook (all cells run, output visible)

  • Optional: README.txt describing your chosen domain, how to use your system, and any extra features

Where to submit: Brightspace → Assignments → Expert System Lab

Due: Sunday, 11:59 PM (end of Week 6)

Tips for Success

Common mistakes (avoid these!):

  • Forgetting to check whether a conclusion is already known before firing a rule (infinite loop)

  • Not detecting when no new facts were derived (another infinite loop)

  • Modifying self.facts during the inner rule-scan loop (use a separate new_facts set)

  • Using vague variable names like a, b, x — use fever, respiratory_issue, etc.

  • Skipping Part 4 (the explanation facility is 15 points — do not leave it out)

  • Not chaining rules (if no rule’s conclusion appears as another rule’s condition, your KB is too flat)

Best practices:

  • Build Part 1 first and test it thoroughly before moving to Part 2

  • Add print statements to debug the forward chaining loop

  • Start your domain KB with simple, 1-condition rules, then add complexity

  • Test your KB on a case where you know the right answer before building more rules

Resources


Code examples adapted from aima-python, MIT License, Copyright 2016 aima-python contributors.

This work is licensed under CC BY-SA 4.0.