# Unit 1: Foundations of Artificial Intelligence — Wrap-Up and Self-Assessment
You have covered a lot this week. Let’s consolidate what you have learned and prepare for the unit assessment.
Artificial intelligence is not magic and not science fiction — it is a rigorous engineering discipline built on contributions from philosophy, mathematics, economics, neuroscience, psychology, computer science, and linguistics.
Current AI systems are narrow: extraordinarily capable within their trained domains and severely limited outside them. The potential benefits — improved healthcare, safer transportation, more accessible expertise — are real. So are the risks: algorithmic bias, privacy erosion, job displacement, and concentration of power.
Building AI that is accurate, fair, explainable, and accountable is not just an ethical aspiration. It is what it means to build AI that works.
## Key Takeaways
### What is AI?

- AI researchers have used four lenses: thinking humanly (cognitive modeling), thinking rationally (laws of thought), acting humanly (Turing Test), and acting rationally (rational agents).
- The rational agent perspective — designing systems that perceive and act to achieve the best expected outcome — is the dominant modern paradigm.
- AI is a multidisciplinary field drawing on philosophy, mathematics, economics, neuroscience, psychology, computer engineering, and linguistics.
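The rational-agent idea above can be sketched as a loop: perceive the environment, then pick the action with the highest expected utility. The vacuum-world percepts, actions, and utility numbers below are illustrative placeholders, not from this unit's materials.

```python
# Minimal sketch of a rational agent: choose the action that maximizes
# expected utility given the current percept. The toy domain (a vacuum
# agent deciding whether to suck or move) is an illustrative assumption.

def expected_utility(action, percept):
    """Toy utility model: sucking pays off only in a dirty square."""
    dirty = percept["dirty"]
    return {"suck": 10 if dirty else -1, "move": 1}[action]

def rational_agent(percept, actions=("suck", "move")):
    """Act rationally: pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, percept))

print(rational_agent({"dirty": True}))   # suck
print(rational_agent({"dirty": False}))  # move
```

The key design point is that rationality is defined by the performance measure, not by mimicking human thought: the agent simply maximizes expected outcome given what it perceives.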
### AI’s history

- The field was formally born at the Dartmouth Conference in 1956, where John McCarthy coined the term "artificial intelligence."
- A recurring cycle: early successes generate overoptimistic predictions; fundamental limitations become apparent; funding declines (AI winter); patient progress eventually unlocks new capability.
- Modern AI is driven by deep learning, big data, and GPU computing. We are currently in an AI spring with rapid progress.
### Current capabilities and limits

- AI achieves superhuman performance on specific narrow tasks: game playing (chess, Go, poker), image recognition, medical imaging, protein structure prediction, language generation.
- AI still struggles with common-sense reasoning, few-shot generalization, causal understanding, explainability, and physical manipulation in unstructured environments.
- Everything deployed today is narrow AI. Artificial general intelligence does not exist, and there is no clear technical path to building it.
### Real-world deployment and ethics

- AI is deployed across healthcare, transportation, finance, law, education, and creative arts.
- The NIST AI Risk Management Framework defines trustworthy AI as accurate, reliable, safe, secure, explainable, privacy-enhanced, fair, and accountable.
- Algorithmic bias is well-documented: facial recognition, criminal justice scoring, and hiring tools have all shown differential performance across demographic groups.
- The EU AI Act (2024) is the world’s first comprehensive AI law, taking a risk-based approach that prohibits certain uses and imposes strict requirements on high-risk applications.
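How bias enters a system that never sees a protected attribute can be made concrete with a small simulation. The scenario below (a "neutral" zip-code feature that correlates with group membership) is entirely synthetic and illustrative; it is not one of the documented cases mentioned above.

```python
# Illustration of a proxy variable: the scorer never sees the protected
# "group" attribute, yet a correlated zip-code feature reproduces the
# disparity anyway. All data here is synthetic and purely illustrative.
import random

random.seed(0)

# Synthetic population: living in zip code A correlates with group membership.
people = [
    {"group": g, "zip_a": random.random() < (0.9 if g == "A" else 0.1)}
    for g in random.choices(["A", "B"], k=10_000)
]

def score(person):
    """A 'neutral' scorer that uses only the zip-code feature."""
    return 1 if person["zip_a"] else 0

def approval_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(score(p) for p in members) / len(members)

rate_a = approval_rate("A")
rate_b = approval_rate("B")
print(f"approval rate, group A: {rate_a:.2f}")  # roughly 0.90
print(f"approval rate, group B: {rate_b:.2f}")  # roughly 0.10
```

Dropping the protected attribute from the inputs does not remove the bias; the proxy carries it through, which is why fairness auditing looks at outcomes across groups rather than at the feature list alone.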
## Summary Table

| Concept | Definition | First Covered |
|---|---|---|
| Turing Test | Behavioral test for machine intelligence (Turing, 1950): a human evaluator cannot distinguish machine from human in text conversation | Section 1.1 |
| Rational Agent | An entity that perceives its environment and acts to maximize expected performance according to a goal | Section 1.1 |
| Dartmouth Conference | 1956 summer workshop that coined "artificial intelligence" and formally established the field | Section 1.2 |
| AI Winter | Period of reduced funding following overoptimistic predictions and disappointing results | Section 1.2 |
| Expert System | AI using a knowledge base of human-expert rules and an inference engine to solve domain-specific problems | Section 1.2 |
| Deep Learning | Multi-layer neural networks capable of learning hierarchical representations from data | Sections 1.2, 1.3 |
| Narrow AI (ANI) | AI optimized for a specific task; the category all current AI falls into | Section 1.3 |
| Artificial General Intelligence (AGI) | Hypothetical AI with human-level performance across all intellectual domains; does not exist | Section 1.3 |
| Superintelligence | Hypothetical AI surpassing human cognitive performance in all domains; purely speculative | Section 1.3 |
| NIST AI RMF | Voluntary U.S. framework for trustworthy AI: Govern, Map, Measure, Manage | Section 1.4 |
| Algorithmic Bias | Systematic unfair discrimination by an AI system, often from non-representative training data | Sections 1.4, 1.5 |
| Proxy Variable | Neutral-appearing variable that correlates with a protected characteristic and can introduce bias | Section 1.5 |
| Fairness (Algorithmic) | The principle that AI outputs should be equitable across demographic groups | Section 1.5 |
| EU AI Act | EU Regulation 2024/1689, the world’s first comprehensive AI law, using a risk-based approach | Section 1.5 |
| Risk-Based Regulation | Regulatory framework that scales requirements according to potential harm severity | Section 1.5 |
| Trustworthy AI | AI that is accurate, reliable, safe, secure, explainable, privacy-enhanced, fair, and accountable | Section 1.4 |
## Self-Assessment Questions

Test yourself before the unit quiz.

1. Explain the difference between "thinking humanly" and "acting rationally" approaches to AI. Give a real-world AI system as an example of each.
2. What was the Dartmouth Conference, and why is it significant for AI history?
3. Why did expert systems decline despite their early successes in the 1980s?
4. Name three things current AI does exceptionally well and three things it still struggles with. For each, briefly explain why.
5. Describe one documented case of algorithmic bias. What caused it? What were the real-world harms?
6. What is the difference between narrow AI and artificial general intelligence? Why does the distinction matter?
7. What are the four EU AI Act risk tiers? Give one example of an AI system that would fall into each tier.
8. What does NIST mean by "trustworthy AI"? List at least four characteristics.
Take the Unit 1 self-check quiz to test your mastery of the core concepts before submitting your lab assignment.
## Glossary: Unit 1 Key Terms

- **Algorithmic Bias**: Systematic and unfair discrimination produced by an AI system, arising from non-representative training data, biased problem formulations, or optimization criteria that perform inequitably across demographic groups.
- **Artificial General Intelligence (AGI)**: A hypothetical AI system with the ability to understand, learn, and apply knowledge across any intellectual domain at least as well as a human. No such system exists.
- **Artificial Intelligence**: The field of computer science concerned with designing agents that perceive their environment and take actions to achieve goals.
- **Cognitive Modeling**: An AI approach that attempts to replicate the internal mental processes of human cognition rather than just external behavior.
- **Dartmouth Conference**: A 1956 summer workshop at Dartmouth College where the term "artificial intelligence" was coined and the field formally established.
- **Deep Learning**: A family of machine learning methods based on neural networks with many layers, capable of learning hierarchical representations from raw data.
- **ELIZA**: An early natural language processing program (Joseph Weizenbaum, 1964–1966) that simulated a Rogerian psychotherapist by reflecting users' statements back to them.
- **EU AI Act**: European Union Regulation 2024/1689, the world’s first comprehensive AI law, which uses a risk-based approach to regulate AI systems.
- **Expert System**: An AI program that uses a knowledge base of human-expert-derived rules and an inference engine to solve problems in a specific domain.
- **Fairness (Algorithmic)**: The principle that an AI system’s outputs should be equitable across demographic groups.
- **Narrow AI (Artificial Narrow Intelligence, ANI)**: AI systems designed and optimized for a specific task or narrow domain. All currently deployed AI is narrow AI.
- **NIST AI Risk Management Framework**: A voluntary framework developed by the National Institute of Standards and Technology (2023) providing guidance for managing AI risks across the full system lifecycle.
- **Proxy Variable**: A variable that appears neutral but correlates strongly with a protected characteristic such as race, gender, or religion.
- **Rational Agent**: An entity that perceives its environment through sensors and produces actions through actuators in order to maximize its expected performance measure.
- **Risk-Based Regulation**: A regulatory approach that scales requirements according to the severity of potential harm.
- **Superintelligence**: A hypothetical AI system that surpasses human cognitive performance across all domains.
- **Trustworthy AI**: An AI system designed and operated to be accurate, reliable, safe, secure, explainable, privacy-enhanced, fair, and accountable.
- **Turing Test**: A behavioral test of machine intelligence proposed by Alan Turing (1950): if a human evaluator cannot distinguish machine from human in text conversation, the machine is considered intelligent.
## Looking ahead to Unit 2: Intelligent Agents
Unit 1 introduced the rational agent as the dominant paradigm in modern AI. Unit 2 dives into what that framework actually means in practice.
You will learn how to describe any AI system using the PEAS framework (Performance measure, Environment, Actuators, Sensors), how to classify the environments agents operate in, and how four different agent architectures — from simple reflex agents to learning agents — relate to each other.
The rational agent perspective you learned this week is the foundation on which Units 2-8 build.
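As a small preview, the simplest of the four architectures Unit 2 covers, the simple reflex agent, can be sketched as a set of condition-action rules applied to the current percept only. The thermostat domain and thresholds below are illustrative assumptions, not part of the Unit 2 materials.

```python
# Sketch of a simple reflex agent: the action is chosen purely from the
# current percept via condition-action rules, with no internal state.
# The thermostat domain and thresholds are illustrative assumptions.

RULES = [
    (lambda temp: temp < 18.0, "heat_on"),
    (lambda temp: temp > 24.0, "heat_off"),
    (lambda temp: True, "no_op"),  # default rule: do nothing
]

def simple_reflex_agent(percept_temp):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(percept_temp):
            return action

print(simple_reflex_agent(15.0))  # heat_on
print(simple_reflex_agent(21.0))  # no_op
```

In PEAS terms, the percept here comes from a temperature sensor and the actions drive a heater actuator; Unit 2 shows why agents like this fail in environments that require memory of past percepts.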
Original content for CSC 114: Artificial Intelligence I, Central Piedmont Community College.
This work is licensed under CC BY-SA 4.0.