A Brief History of Logic in AI

Unit 5: Introduction to Logic in AI — Section 5.4

The tools you used in Sections 5.2 and 5.3 — propositions, connectives, truth tables, inference rules — did not appear overnight. They are the product of roughly 2,300 years of intellectual work, from a Greek philosopher cataloguing argument patterns to a computer program proving theorems in 1956. Understanding that history helps you see why logic matters for AI and where it is headed.

350 BCE: Aristotle and the Syllogism

The first formal system of deductive reasoning is credited to the ancient Greek philosopher Aristotle (384–322 BCE). In a collection of works known as the Organon, Aristotle identified a class of arguments he called syllogisms: structured three-statement arguments where two premises guarantee a conclusion.

A syllogism:

Premise 1: All humans are mortal.

Premise 2: Socrates is a human.

Conclusion: Therefore, Socrates is mortal.

Aristotle’s key insight was that the validity of the argument depends only on its form, not on its specific content: if the premises are true, the conclusion must be true, whatever the subject matter. The same pattern works for any substitution:

  • All mammals are warm-blooded. Dolphins are mammals. Therefore, dolphins are warm-blooded.

  • All prime numbers greater than 2 are odd. 7 is a prime number greater than 2. Therefore, 7 is odd.

This separation of form from content — the idea that an argument can be valid regardless of its subject matter — is the defining idea of formal logic.
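
This separation of form from content can be made concrete in code. The sketch below is our own illustration (the function name and example sets are not from the text): it encodes the classic "Barbara" pattern once, over finite sets, and reuses it with different content.

```python
# Aristotle's "Barbara" form, over finite sets. The sets and names
# below are illustrative, not part of the original text.

def barbara_valid(A, B, x):
    """All A are B; x is an A; therefore x is a B.
    Returns True only when both premises hold, i.e. when the
    conclusion "x is in B" is guaranteed by the form alone."""
    return A <= B and x in A   # A <= B means "every member of A is in B"

humans = {"socrates", "plato"}
mortals = {"socrates", "plato", "fido"}
assert barbara_valid(humans, mortals, "socrates")        # the classic example

mammals = {"dolphin", "cat"}
warm_blooded = {"dolphin", "cat", "sparrow"}
assert barbara_valid(mammals, warm_blooded, "dolphin")   # same form, new content
```

The same function validates both arguments because it inspects only the structure of the premises, never what the sets mean.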

Syllogism

A deductive argument with two premises and a conclusion, in which the conclusion follows necessarily from the premises by virtue of the argument’s logical form. Aristotle catalogued valid syllogistic forms in his Prior Analytics (c. 350 BCE), the first systematic treatment of formal inference.

Aristotle (384–322 BCE)

Greek philosopher and polymath, student of Plato. His logical works (the Organon) established formal deductive reasoning as a discipline and remained the dominant logical framework for nearly 2,000 years. His syllogistic logic was the first attempt to reduce valid argument to a set of mechanical rules.

1847: Boole and the Algebra of Logic

British mathematician George Boole (1815–1864) made a decisive move: he showed that logical reasoning could be expressed using ordinary algebra. In his 1847 work The Mathematical Analysis of Logic, and more fully in An Investigation of the Laws of Thought (1854), Boole created what we now call Boolean algebra.

Boolean Algebra

A mathematical system developed by George Boole in which logical values (true and false) are treated as numbers (1 and 0) and logical operations (AND, OR, NOT) are treated as algebraic operations. Boolean algebra forms the mathematical basis of digital circuit design and propositional logic.

Boole’s system used three operations (written here in modern notation):

  • ∧ (AND, multiplication): both must be 1 to produce 1

  • ∨ (OR, addition): at least one must be 1 to produce 1

  • ¬ (NOT, complement): flips 1 to 0 and 0 to 1

These are exactly the connectives you learned in Section 5.2. Boole’s contribution was showing they form a consistent algebraic system with its own laws (commutativity, distributivity, and the rest of the equivalence table from Section 5.3).
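
The point of an algebraic system is that its laws can be checked mechanically. The sketch below brute-forces every 0/1 assignment to confirm two of them, treating AND as `&`, OR as `|`, and NOT x as `1 - x`:

```python
from itertools import product

# Brute-force check of two Boolean-algebra laws over all assignments,
# with AND as &, OR as |, and NOT x as 1 - x (values are 0 and 1).
for p, q, r in product([0, 1], repeat=3):
    # De Morgan: NOT (p AND q)  ==  (NOT p) OR (NOT q)
    assert (1 - (p & q)) == ((1 - p) | (1 - q))
    # Distributivity: p AND (q OR r)  ==  (p AND q) OR (p AND r)
    assert (p & (q | r)) == ((p & q) | (p & r))
print("both laws hold for all 8 assignments")
```

Eight assignments suffice because each law mentions at most three variables; this exhaustive-check idea is exactly the truth-table method from Section 5.2.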

George Boole (1815–1864)

British mathematician and logician. Largely self-taught, he became the first professor of mathematics at Queen’s College Cork. His Boolean algebra was largely overlooked by philosophers and mathematicians during his lifetime — it would take another 80 years for engineers to realize it was exactly what they needed to design switching circuits and, ultimately, computers.

Claude Shannon’s 1937 master’s thesis, "A Symbolic Analysis of Relay and Switching Circuits," recognized that Boolean algebra could describe the behavior of electrical relays. This connection between Boolean logic and circuit design is the mathematical foundation of every digital computer, and historians have since described the thesis as possibly the most important master’s thesis of the 20th century.

1879–1903: Frege and Predicate Logic

Boole’s algebra handled propositions but could not express general statements about objects and their relationships. German logician Gottlob Frege (1848–1925) solved this in his 1879 work Begriffsschrift (roughly "concept notation") by inventing predicate logic — what we now call first-order logic.

Predicate Logic (First-Order Logic)

A logical system, developed by Gottlob Frege, that extends propositional logic with predicates (properties and relations), constants (specific objects), variables (arbitrary objects), and quantifiers (∀ "for all" and ∃ "there exists"). First-order logic can express general statements about all or some members of a domain, which propositional logic cannot.
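
Over a finite domain, the two quantifiers correspond directly to Python’s built-in `all()` and `any()`. The domain and predicate below are our own illustrations:

```python
# ∀ and ∃ over a finite domain reduce to Python's all() and any().
# The domain and predicate here are illustrative.
domain = range(1, 11)          # the integers 1..10

def is_even(x):
    return x % 2 == 0

forall_even = all(is_even(x) for x in domain)   # ∀x. Even(x): false over 1..10
exists_even = any(is_even(x) for x in domain)   # ∃x. Even(x): true over 1..10
```

Note that neither quantified statement can be expressed in propositional logic without writing out one proposition per domain element, which is exactly the limitation Frege removed.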

Frege’s Grundgesetze der Arithmetik (Basic Laws of Arithmetic, 1893–1903) attempted to derive all of mathematics from logical axioms. The project ended catastrophically: just as the second volume was going to press, Bertrand Russell sent Frege a letter showing that one of his axioms led to a contradiction (now known as Russell’s Paradox). But Frege’s logical language survived, and it remains the standard formalism for mathematics and AI knowledge representation.

1900–1910: The Foundations Crisis

The discovery of Russell’s Paradox and related contradictions shook mathematicians' confidence in the foundations of mathematics. Bertrand Russell and Alfred North Whitehead spent a decade writing Principia Mathematica (1910–1913), an attempt to rebuild mathematics on secure logical foundations. It was this three-volume work that the Logic Theorist would later mine for theorems to prove.

1956: The Logic Theorist — The First AI Program

On August 9, 1956, Allen Newell and Herbert Simon demonstrated the Logic Theorist at the Dartmouth Conference — the event that gave the field of artificial intelligence its name.

Logic Theorist

A computer program written by Allen Newell, Herbert Simon, and Cliff Shaw in 1955–1956, generally considered the first artificial intelligence program. The Logic Theorist proved 38 of the first 52 theorems from Whitehead and Russell’s Principia Mathematica using heuristic search guided by logical inference rules. One of its proofs was more elegant than the original human proof.

The Logic Theorist worked by representing each theorem as a propositional formula and searching for a proof using stored inference rules. Its approach was not exhaustive brute-force search — Simon and Newell designed it to use heuristics (rule-of-thumb strategies) to focus attention on promising directions. This combination of logic and heuristic search prefigures the architecture of modern AI systems.
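
The core idea of applying stored inference rules until a goal is reached can be sketched in a few lines. The toy below forward-chains with modus ponens over made-up propositions; it is a drastic simplification for illustration, not the Logic Theorist’s actual procedure, which searched backward from the goal under heuristic guidance:

```python
# Toy forward chaining with modus ponens. A drastic simplification:
# the Logic Theorist itself worked backward from the goal, using
# heuristics to choose which rules to try.
rules = [("P", "Q"), ("Q", "R")]   # ("X", "Y") stands for the formula X -> Y
known = {"P"}                      # propositions established so far
goal = "R"

changed = True
while changed:                     # keep applying rules until nothing new appears
    changed = False
    for premise, conclusion in rules:
        if premise in known and conclusion not in known:
            known.add(conclusion)  # modus ponens: from X and X -> Y, infer Y
            changed = True

assert goal in known               # P, P -> Q, and Q -> R together yield R
```

Even this toy shows why heuristics matter: with thousands of rules, blindly applying everything applicable explodes combinatorially, which is the problem Newell and Simon’s heuristics addressed.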

Allen Newell (1927–1992) and Herbert A. Simon (1916–2001)

Cognitive scientists and computer scientists who built the Logic Theorist and its successor, the General Problem Solver. Simon won the Nobel Prize in Economics in 1978 for his theory of bounded rationality. Both Newell and Simon received the Turing Award in 1975. Their work established the heuristic search paradigm that dominated AI research for three decades.

1958: McCarthy and the Symbol-Manipulation Vision

John McCarthy (1927–2011) coined the term "artificial intelligence" in his 1955 proposal for the Dartmouth Conference and spent the following decade building the infrastructure for logical AI. In 1958, he designed LISP — a programming language built around symbolic expressions and recursive list processing, tailored for AI programs that manipulate logical symbols.

McCarthy’s broader vision was an AI based entirely on formal logic: a system that stored knowledge as first-order logic sentences, reasoned using inference rules, and could explain every conclusion by producing a formal proof. This vision directly inspired the knowledge-based agent architecture you will study in Unit 6.

Key dates in logical AI:

  • 350 BCE — Aristotle’s Organon: the first formal logic

  • 1847 — Boole’s Mathematical Analysis of Logic: algebraic logic

  • 1854 — Boole’s Laws of Thought: Boolean algebra fully formalized

  • 1879 — Frege’s Begriffsschrift: predicate (first-order) logic

  • 1910–1913 — Whitehead & Russell’s Principia Mathematica

  • 1937 — Shannon: Boolean algebra describes switching circuits

  • 1956 — Logic Theorist demonstrated at Dartmouth Conference

  • 1958 — McCarthy designs LISP

  • 1965 — DENDRAL: first expert system (molecular structure identification)

  • 1970s — MYCIN: medical diagnosis expert system, 69% accuracy on bacterial infections

  • 1980s — XCON: logic-based system configures VAX computers; saves DEC millions annually

  • 1990s–present — Formal verification enters hardware and software engineering

The Expert Systems Era: 1965–1990

The Logic Theorist proved logic could reason, but its theorems were abstract. The 1960s–1980s saw logic applied to real problems through expert systems: programs that encoded domain expertise as hundreds or thousands of logical rules.

MYCIN rule (1970s):

IF  organism-stain IS gram-positive
AND organism-morphology IS coccus
AND organism-growth-conformation IS chains
THEN organism IS streptococcus (confidence 0.7)

MYCIN (Stanford, 1970s) applied rules like this to bacterial infection diagnosis and achieved 69% accuracy — comparable to or better than junior physicians. Its rule base captured the knowledge of infectious disease specialists in a form a program could use.
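
A rule like the one above can be represented as data and matched against a dictionary of findings. The sketch below is illustrative only: the field names are ours, and real MYCIN chained hundreds of rules and combined certainty factors rather than firing a single rule.

```python
# A MYCIN-style rule as data, matched against recorded findings.
# Field names and the single-rule match are illustrative; real MYCIN
# chained many rules and combined certainty factors.
rule = {
    "if": {"stain": "gram-positive",
           "morphology": "coccus",
           "growth-conformation": "chains"},
    "then": ("organism", "streptococcus"),
    "confidence": 0.7,
}

def apply_rule(rule, findings):
    """Return (conclusion, confidence) when every IF condition matches."""
    if all(findings.get(attr) == val for attr, val in rule["if"].items()):
        return rule["then"], rule["confidence"]
    return None

findings = {"stain": "gram-positive",
            "morphology": "coccus",
            "growth-conformation": "chains"}
result = apply_rule(rule, findings)
assert result == (("organism", "streptococcus"), 0.7)
```

Representing rules as data rather than code is what made large rule bases maintainable: specialists could add or edit rules without touching the inference engine.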

XCON (Digital Equipment Corporation, 1980s):

XCON (eXpert CONfigurer) automatically configured VAX computer systems for customer orders. It contained approximately 2,500 rules about hardware compatibility and configuration constraints. By 1986, XCON processed 80,000 orders per year and saved DEC an estimated $40 million annually. XCON’s success triggered the "expert systems boom" of the 1980s and showed that logical AI had practical economic value.

Modern Logical AI: Verification and Hybrid Systems

Expert systems declined in the 1990s when their brittleness became apparent — they worked well within their domain but failed completely outside it, and maintaining thousands of rules as the world changed was expensive.

But logic did not disappear. It found new homes.

Formal verification applies model checking and theorem proving to verify that software and hardware behave correctly for all possible inputs. Tools like SPIN, Coq, and Isabelle are used to verify operating systems, cryptographic protocols, and safety-critical controllers.

Knowledge graphs — used by Google, Amazon, and others — store billions of facts as logical relationships and use inference to answer queries. When you search "Who is Barack Obama?" and Google returns a structured summary rather than just links, it used a knowledge graph and logical inference.
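
The underlying mechanism can be sketched as triples plus a hand-written inference rule. The entities, relation names, and rule below are illustrative only, not any production system’s actual schema:

```python
# A toy knowledge graph: facts as (subject, relation, object) triples.
# Entities, relation names, and the inference rule are illustrative.
facts = {
    ("barack_obama", "born_in", "honolulu"),
    ("honolulu", "located_in", "hawaii"),
}

# One hand-written rule: born_in(x, c) AND located_in(c, s)
# lets us infer born_in_state(x, s).
inferred = {
    (person, "born_in_state", state)
    for (person, r1, city) in facts if r1 == "born_in"
    for (c, r2, state) in facts if r2 == "located_in" and c == city
}
assert ("barack_obama", "born_in_state", "hawaii") in inferred
```

The inferred triple was never stored; it follows logically from two stored facts, which is what lets a graph of billions of triples answer questions it was never explicitly told the answers to.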

Satisfiability solving (SAT and SMT solvers) has advanced to the point where formulas with millions of variables can be checked efficiently. Modern chip design, AI planning, and software testing rely on SAT solvers.
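
The problem a SAT solver answers can be stated in a few lines. The brute-force checker below (a sketch using the common convention of signed-integer literals) enumerates all assignments; real solvers based on DPLL/CDCL reach the same answers at scale by pruning this search rather than enumerating it:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Find a satisfying assignment by trying all 2**n_vars candidates.
    A literal is +i (variable i true) or -i (variable i false); a clause
    is a list of literals, and the formula is the AND of its clauses.
    Real solvers (DPLL/CDCL) prune this space instead of enumerating it."""
    for bits in product([False, True], repeat=n_vars):
        def literal_true(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(literal_true(l) for l in clause) for clause in clauses):
            return bits                   # first satisfying assignment found
    return None                           # no assignment works: unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
model = brute_force_sat([[1, 2], [-1, 3], [-2, -3]], 3)
assert model is not None                  # the formula is satisfiable
```

The exponential loop is the whole difficulty: 2**n candidates for n variables. The advance of the last three decades is that clause learning and clever pruning make industrial formulas with millions of variables tractable despite that worst case.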

Hybrid systems combine neural perception with logical reasoning. A vision system identifies objects; a logical reasoner plans actions based on those objects. This architecture often appears under the label "neuro-symbolic AI."

Aristotle’s syllogism and a modern SAT solver are both formal systems for checking whether conclusions follow from premises.

What changed between 350 BCE and today? Identify at least three technical advances from the timeline that made automated reasoning at scale possible.

Then consider: are there reasoning tasks that you think logic will never be able to handle? What makes those tasks different from the ones logic handles well?

Match key figures and events to their contributions to logical AI.


Based on the UC Berkeley CS 188 Online Textbook by Nikhil Sharma, Josh Hug, Jacky Liang, and Henry Zhu, licensed under CC BY-SA 4.0.

This work is licensed under CC BY-SA 4.0.