Why Uncertainty? The Limits of Logic
Unit 7: Probability and Uncertainty in AI — Section 7.1
Imagine you are an AI assistant helping a doctor diagnose a patient. The patient has a fever, a cough, and muscle aches. You have a knowledge base packed with medical rules. But every rule you consult says something frustrating: "could be influenza, could be COVID-19, could be strep, could be a dozen other things." A logical system requires certainty — you either conclude "the patient has disease X" or you cannot conclude anything at all. Real medicine never has that luxury. This is why AI needs probability.
The World Is Not a Logic Puzzle
In earlier units we worked with agents that reason by deduction. Given premises that are known to be true, logic produces conclusions that are guaranteed to be true. This works beautifully for puzzles, route planning, and formal theorem proving. It breaks down almost everywhere else.
Real environments impose three distinct sources of uncertainty on any reasoning agent:
Laziness — Even when complete information exists in principle, listing every rule and every exception is impractical. A comprehensive medical knowledge base would require millions of rules covering every possible combination of symptoms, patient history, lab values, and comorbidities. No one has written those rules, and even if they existed, the computational cost of applying them all would be prohibitive.
Theoretical ignorance — Sometimes science genuinely does not know the exact causal mechanism behind a phenomenon. We know that smoking increases the probability of lung cancer, but we cannot write a deterministic rule saying "if a person smokes, they will develop cancer," because that is simply not how the biology works.
Practical ignorance — Even when the causal mechanism is understood, we rarely have access to all the relevant information. A doctor cannot run every possible test on every patient. A robot navigating a warehouse cannot see around corners. An email filter cannot read the sender’s mind. Decisions must be made with incomplete data.
- Uncertainty: The condition in which an agent lacks complete information about the state of the world, the consequences of actions, or both. In AI, uncertainty arises from incomplete observation, noisy sensors, stochastic environments, or the practical impossibility of modeling every causal factor.
Logic’s Limits: A Concrete Comparison
Consider a medical scenario. A patient presents with fever and cough. You have the following medical rules in a logic-based knowledge base:
IF fever AND cough THEN influenza.
IF fever AND cough THEN COVID-19.
IF fever AND cough THEN pneumonia.
A pure logic system immediately runs into trouble: all three conclusions follow from the same evidence. Logic cannot tell you which conclusion to act on. You might add more specific rules — "IF fever AND cough AND body aches THEN influenza" — but symptom overlap is unavoidable, and every added rule reveals another exception.
More fundamentally, logic represents knowledge as binary: a proposition is either true or false. But the real world is not binary. "It will rain tomorrow" is not simply true or false — it is more or less likely depending on current atmospheric conditions. "This email is spam" is not simply true or false — it is more or less probable given the words it contains.
Logic asks: Is this true? Probability asks: How likely is this?
The shift from the first question to the second is the fundamental insight that enables modern AI to operate in realistic, messy, uncertain environments.
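The contrast can be made concrete in a few lines of code. This is an illustrative sketch, not a real diagnostic system: the three rules mirror the ones above, and the conditional probabilities are made-up numbers standing in for what a probabilistic model would supply.

```python
# The same evidence handled logically vs probabilistically.
# All probabilities below are invented for illustration only.

evidence = {"fever", "cough"}

# Logical rules: every rule whose conditions hold fires, so all three
# diagnoses follow, and logic gives no way to choose among them.
rules = [
    ({"fever", "cough"}, "influenza"),
    ({"fever", "cough"}, "COVID-19"),
    ({"fever", "cough"}, "pneumonia"),
]
logical_conclusions = [d for cond, d in rules if cond <= evidence]
print(logical_conclusions)  # all three diagnoses, no way to rank them

# Probabilistic knowledge: a distribution over diagnoses given the evidence.
p_given_evidence = {"influenza": 0.55, "COVID-19": 0.30, "pneumonia": 0.15}
best = max(p_given_evidence, key=p_given_evidence.get)
print(best)  # the most likely diagnosis, with quantified confidence
```

The probabilistic agent does not pretend to certainty; it ranks hypotheses and can report how confident it is in the top one.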
Noisy Sensors and Imperfect Observations
Even when a robot’s environment is fully deterministic, the robot’s perception of that environment is not. Sensors have noise. A temperature sensor reports a value within some error range. A camera misclassifies pixels under poor lighting. A microphone picks up background noise. A GPS unit drifts.
Logical reasoning has no natural way to handle sensor noise. If the sensor reports "temperature = 37.3°C," logical reasoning treats that as a fact. But if the true temperature might be anywhere from 36.8°C to 37.8°C, the "fact" is really a probability distribution.
Probabilistic reasoning handles this naturally. Instead of asserting "temperature = 37.3°C," the agent maintains a belief: "the true temperature is most likely around 37°C, with decreasing probability as we move further from that value." Every subsequent sensor reading updates that belief rather than overwriting it.
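The "update rather than overwrite" idea can be sketched with a simple one-dimensional belief update. This assumes Gaussian noise (a standard simplification) and uses illustrative numbers; the fusion rule is the scalar case of the Kalman-filter update covered later in the course.

```python
# Belief updating for a noisy temperature sensor, assuming Gaussian noise.
# Each reading refines the belief instead of replacing it.

def fuse(mean, var, reading, sensor_var):
    """Combine a Gaussian belief N(mean, var) with a reading N(reading, sensor_var)."""
    k = var / (var + sensor_var)           # how much to trust the new reading
    new_mean = mean + k * (reading - mean)
    new_var = (1 - k) * var                # uncertainty always shrinks
    return new_mean, new_var

belief = (37.0, 1.0)                       # prior: about 37 degrees C, fairly uncertain
for reading in [37.3, 37.1, 37.4]:         # noisy readings, sensor variance 0.25
    belief = fuse(*belief, reading, 0.25)
print(belief)  # mean pulled toward the readings, variance much smaller than 1.0
```

Note that no single reading is treated as a fact: the agent's confidence grows with each measurement, which is exactly what logical assertion cannot express.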
- Noisy Sensor: A sensor whose readings deviate from the true value of the quantity being measured due to measurement error, interference, or physical limitations. Probabilistic models represent sensor noise as a probability distribution over possible readings given the true state.
Unpredictable Environments and Stochastic Actions
A further complication is that many environments are inherently stochastic — even perfectly executed actions have uncertain outcomes.
A robot arm instructed to pick up a cup might succeed 95% of the time and drop it 5% of the time, not because of poor programming, but because of natural variation in friction, object orientation, and motor timing. A medication administered to a patient has a probability of curing the disease, a probability of partial effect, and a probability of no effect — and those probabilities depend on variables that may not be observable.
Stochastic Action: The Unreliable Vacuum
Recall the vacuum world agent from Unit 2. In the original formulation, every action had a deterministic outcome: the vacuum either cleaned the square or it did not.
Now suppose the vacuum’s suction motor is worn down. When it attempts to clean a dirty square, it succeeds only 80% of the time. If you run it twice, the probability that the square is now clean is:
P(clean after 2 attempts) = 1 − P(still dirty after 2 attempts) = 1 − (0.20 × 0.20) = 1 − 0.04 = 0.96
A logical agent has no framework for this calculation. A probabilistic agent can plan with it.
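The calculation generalizes to any success probability and any number of independent attempts. A minimal sketch:

```python
# Probability the square is clean after n independent cleaning attempts,
# where each attempt succeeds with probability p_success.
def p_clean(p_success, attempts):
    return 1 - (1 - p_success) ** attempts

print(p_clean(0.80, 2))  # approximately 0.96, matching the worked example
print(p_clean(0.80, 3))  # a third attempt pushes the probability higher still
```

A probabilistic planner can use this directly: it can decide how many cleaning attempts are worth their cost given the 80% success rate.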
Degrees of Belief: The Core Intuition
The solution to all three sources of uncertainty is to replace binary truth values with degrees of belief.
Instead of asserting "it will rain tomorrow = TRUE," a probabilistic agent maintains "P(rain tomorrow) = 0.65." Instead of concluding "this email is spam = TRUE," the agent computes "P(spam | observed words) = 0.92." Instead of claiming "the robot is at position (3, 4) = TRUE," the agent tracks a distribution over all possible positions.
A probability of 0 corresponds to impossibility — the equivalent of logical falsehood. A probability of 1 corresponds to certainty — the equivalent of logical truth. Every value in between represents a degree of belief, encoding both what the agent thinks is most likely and how confident it is.
- Degree of Belief: A numerical value between 0 and 1 representing an agent’s confidence that a proposition is true. A degree of belief of 0.9 means the agent considers the proposition highly likely; 0.1 means highly unlikely. Probability theory provides the axioms that ensure degrees of belief remain internally consistent.
- Probabilistic Reasoning: A form of reasoning in which conclusions are not guaranteed true or false but are expressed as probabilities. An agent that reasons probabilistically updates its degrees of belief as new evidence arrives, using rules such as Bayes' theorem.
Why Probability — and Not Something Else?
Students sometimes ask whether probability is the only way to handle uncertainty. Could we use fuzzy logic, confidence intervals, or just qualitative labels like "likely" and "unlikely"?
Probability has two decisive advantages. First, it has a rigorous mathematical foundation (the Kolmogorov axioms) that ensures consistency: your beliefs cannot simultaneously imply contradictory conclusions. Second, it comes with a principled update rule — Bayes' theorem — that tells you exactly how to revise a belief when new evidence arrives. This makes probability not just a description of uncertainty but an engine for rational action.
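To preview what that update rule looks like in practice, here is a minimal Bayes'-theorem update for the spam example. The prior and the word likelihoods are assumed numbers chosen for illustration; Section 7.2 develops the rule formally.

```python
# One Bayes'-theorem update for a binary hypothesis H (spam vs not spam).
# All probabilities are illustrative assumptions, not real spam statistics.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | evidence) given P(H) and P(evidence | H), P(evidence | not H)."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

p_spam = 0.40                              # prior belief the email is spam
# Observe the word "winner": assume P(word | spam) = 0.30, P(word | not spam) = 0.02
p_spam = bayes_update(p_spam, 0.30, 0.02)
print(round(p_spam, 3))  # 0.909 — one piece of evidence sharply revises the belief
```

The point is not the particular numbers but the mechanism: the same formula revises the belief consistently no matter what evidence arrives or in what order.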
Consider the following three AI problems. For each one, identify the main source of uncertainty and explain why logic alone would be insufficient:
- A spam filter classifying incoming email
- A self-driving car deciding whether a blurry shape on the road is a pedestrian or a shadow
- A chess-playing AI (note: chess has perfect, observable state — why does this case not require probability?)
Thinking through these contrasts will sharpen your sense of when probabilistic reasoning is essential and when logic suffices.
Where We Are Headed
Now that you understand why probability matters, the next section grounds that intuition in mathematics. Section 7.2 introduces sample spaces, events, conditional probability, and independence — the formal tools you need to reason quantitatively about uncertain situations.
Real-world AI operates in environments that are partially observable, noisy, and stochastic. Logic requires complete, certain knowledge. Probability provides a principled way to reason and make good decisions even when that certainty is unavailable. The question shifts from "Is it true?" to "How likely is it?" — and that shift unlocks an enormous range of practical AI applications.
Based on the UC Berkeley CS 188 Online Textbook by Nikhil Sharma, Josh Hug, Jacky Liang, and Henry Zhu, licensed under CC BY-SA 4.0.
This work is licensed under CC BY-SA 4.0.