AI in the Real World
Unit 1: Foundations of Artificial Intelligence — Section 1.4
AI is not a single technology living in one research lab. It is a field that touches nearly every sector of society, drawing on contributions from philosophy, mathematics, economics, neuroscience, psychology, computer science, and linguistics — and increasingly shaping decisions in healthcare, transportation, finance, law, and creative arts. Understanding where AI comes from intellectually, and where it is going practically, requires looking at both its multidisciplinary roots and its real-world deployment.
AI Is Multidisciplinary
AI could not exist without foundational contributions from a wide range of fields. Each brought tools and questions that shaped what AI researchers build and how they evaluate it.
Philosophy
Philosophers asked the foundational questions: Can formal rules describe all thinking? How does mental knowledge connect to physical action? Where does knowledge come from?
Aristotle developed the first formal system of logical inference. Descartes' mind-body dualism raised the question of whether a purely physical machine could have a mind. The 20th-century logical positivists tried to reduce all knowledge to formal representations — a goal that directly inspired early AI knowledge representation systems.
Mathematics
Mathematics provided the formal tools AI depends on:
- Logic: George Boole and Gottlob Frege created the formal logical systems underlying symbolic AI.
- Computation theory: Alan Turing proved that some problems cannot be solved by any algorithm, setting fundamental limits on what AI can do.
- Probability: Statistical methods for reasoning under uncertainty underlie modern machine learning.
- Optimization: Finding the best solution among many possibilities is the core computational challenge in training neural networks.

Turing Completeness: The property of a computational system that can simulate any Turing machine, meaning it can, in principle, compute any computable function. Turing’s foundational work also proved the existence of problems that no algorithm can solve — an important limit on AI capability.
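The optimization idea above can be made concrete with a minimal sketch: gradient descent on a one-dimensional quadratic. The function, starting point, and learning rate are illustrative choices, but the "follow the negative gradient downhill" loop is, at vastly larger scale, the same iterative procedure used to train neural networks.

```python
# Minimal gradient descent: minimize f(x) = (x - 3)^2.
# Training a neural network applies this same idea to a loss
# function over millions of parameters instead of one number.

def grad(x):
    # Derivative of (x - 3)^2 is 2 * (x - 3).
    return 2 * (x - 3)

x = 0.0              # arbitrary starting point
learning_rate = 0.1  # step size (illustrative choice)

for _ in range(100):
    x = x - learning_rate * grad(x)

print(round(x, 4))  # converges toward the minimizer x = 3
```

Each step moves x a little way downhill; after enough iterations the estimate settles at the minimum, just as network weights settle toward values that minimize training loss.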
Economics and Decision Theory
Economics contributed the framework for rational decision-making: how should an agent choose among actions when outcomes are uncertain? Expected utility theory, game theory (what happens when multiple agents interact?), and the exploration-exploitation tradeoff (trying new things vs. doing what works) all come from economics and directly inform how AI agents are designed.
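The exploration-exploitation tradeoff mentioned above has a classic concrete form: the epsilon-greedy rule, in which an agent occasionally tries a random action (explore) and otherwise picks the action with the best estimated payoff (exploit). The payoff numbers below are invented for illustration; this is a sketch of the idea, not a production algorithm.

```python
import random

def epsilon_greedy(estimates, epsilon=0.1, rng=random):
    """Pick an action index: explore with probability epsilon,
    otherwise exploit the action with the highest current estimate."""
    if rng.random() < epsilon:
        return rng.randrange(len(estimates))  # explore: try anything
    # exploit: index of the largest estimated payoff
    return max(range(len(estimates)), key=estimates.__getitem__)

# Toy example: three actions with current estimated payoffs.
estimates = [0.2, 0.5, 0.1]
choice = epsilon_greedy(estimates, epsilon=0.0)  # epsilon=0 -> pure exploitation
print(choice)  # -> 1, the highest-estimate action
```

Raising epsilon makes the agent gather more information about uncertain actions at the cost of short-term payoff, which is exactly the tradeoff economics formalized.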
Neuroscience
The brain inspired several waves of AI research. Early perceptrons (1950s) were modeled on neurons. The convolutional neural networks that power modern image recognition are loosely inspired by the mammalian visual cortex. Reinforcement learning draws on psychological theories of reward and punishment.
Neuroscience inspires AI, but AI does not need to copy the brain any more than aeronautics needs to copy birds. Airplanes do not flap their wings — yet they fly. The analogy is useful for generating ideas, not as a constraint on engineering.
Linguistics
Natural language is central to many AI applications. Noam Chomsky’s work showed that language has deep grammatical structure that cannot be fully captured by simple pattern matching. Computational linguistics bridged formal language theory with algorithms, and today’s large language models represent the apex of that tradition — though the question of whether they genuinely "understand" language remains philosophically contested.
Synthesis
AI is uniquely powerful because it synthesizes all of these disciplines into a single engineering framework. A single AI system might use logic (philosophy/mathematics) for reasoning, probability (mathematics) for uncertainty, neural networks (neuroscience/computer science) for learning, decision theory (economics) for action selection, and natural language processing (linguistics) for communication.
No other field integrates this range of intellectual traditions.
AI in Practice: Four Domains
Healthcare
Healthcare is one of AI’s most consequential application areas.
Medical imaging: Deep learning systems match or exceed specialist radiologists at detecting certain conditions from X-rays, MRIs, and pathology slides. Google Health’s AI detects diabetic retinopathy from retinal photographs with sensitivity comparable to ophthalmologists.
Drug discovery: AlphaFold solved the protein structure prediction problem that had resisted 50 years of research. Pharmaceutical companies are using AI to identify candidate drug molecules far faster than traditional lab-based approaches.
Clinical decision support: Natural language processing systems extract information from unstructured clinical notes, flagging potential drug interactions or identifying patients at risk for deterioration.
NIST AI RMF in Healthcare: The NIST AI Risk Management Framework identifies healthcare AI as a high-impact domain where AI risks must be carefully governed. A clinical decision support tool that misclassifies a malignant tumor as benign harms a specific, identifiable person. The NIST framework asks developers to assess not just whether a system is accurate on average, but whether it performs equitably across demographic subgroups and whether it degrades gracefully when inputs fall outside its training distribution.
See NIST AI RMF for the full framework.
Transportation
Autonomous vehicles represent the most visible transportation AI project. Current systems combine cameras, LIDAR, radar, and GPS with deep learning and probabilistic planning to navigate roads. Full autonomy in all conditions remains unsolved, but limited self-driving in geofenced areas is commercially deployed.
Traffic management: AI optimizes signal timing at intersections based on real-time traffic flow, reducing congestion and emissions in cities where such systems have been deployed.
Aviation: AI assists with flight planning, air traffic control scheduling, and predictive maintenance that identifies component failures before they cause incidents.
Finance
Fraud detection: Banks use AI to identify anomalous transaction patterns in real time, blocking fraudulent charges before they are processed.
Algorithmic trading: AI systems execute trades at millisecond timescales, responding to market signals faster than any human.
Credit scoring: ML models assess creditworthiness from financial history, potentially reducing human bias — but also potentially encoding it, if historical lending patterns were themselves discriminatory.
Algorithmic Bias: Systematic and unfair discrimination produced by an AI system, often arising from training data that reflects historical inequities, biased problem formulations, or optimization criteria that perform differently across demographic groups. Algorithmic bias in credit scoring, hiring, and criminal justice has been extensively documented.
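The fraud-detection idea above (flagging anomalous transaction patterns) can be sketched with a crude statistical stand-in: mark any transaction whose amount sits far from the account's historical mean, measured in standard deviations. Real bank systems use learned models over many features; the data and threshold here are invented for illustration only.

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Return indices of transactions more than `threshold`
    standard deviations from the mean amount (a toy stand-in
    for the learned anomaly detectors banks actually use)."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Mostly routine purchases, plus one wildly out-of-pattern charge.
history = [12.50, 8.75, 15.00, 9.99, 11.25, 14.10, 9500.00]
print(flag_anomalies(history, threshold=2.0))  # flags only the $9500 charge
```

The same pattern, generalized to many features and learned thresholds, underlies real-time transaction screening.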
Creative Arts
Generative AI systems can produce images, music, video, and text in response to natural language prompts. These systems raise profound questions about authorship, copyright, and what it means to be creative.
Content moderation: Social media platforms use AI to identify and remove harmful content at a scale that would be impossible for human moderators.
Education: AI tutors adapt instruction to individual students, providing personalized practice and feedback.
Trustworthy AI: The NIST Framework
The U.S. National Institute of Standards and Technology (NIST) developed the AI Risk Management Framework (AI RMF 1.0) to help organizations design, develop, and deploy AI systems responsibly.
The framework defines trustworthy AI as systems that are:
- Valid and reliable — produce correct outputs consistently across expected conditions for their intended use
- Safe — avoid undue harm to users and bystanders
- Secure and resilient — resistant to adversarial manipulation and hardware failure
- Explainable and interpretable — decision processes can be understood by relevant stakeholders
- Privacy-enhanced — protect individual data rights
- Fair and unbiased — treat affected groups equitably, with harmful bias managed
- Accountable and transparent — clear lines of responsibility for AI outcomes
Trustworthy AI: An AI system designed and operated to be valid and reliable, safe, secure and resilient, explainable and interpretable, privacy-enhanced, fair, and accountable and transparent — the seven characteristics identified by NIST in the AI Risk Management Framework (AI RMF 1.0).
NIST AI Risk Management Framework (AI RMF): A voluntary framework developed by the National Institute of Standards and Technology (2023) that provides guidance for organizations to identify, assess, and manage AI risks across the full lifecycle of an AI system. Available in the public domain at nist.gov/artificial-intelligence.
The NIST framework organizes AI risk management into four functions: Govern (establish policies and accountability), Map (identify contexts and risks), Measure (assess and analyze risks), and Manage (prioritize and treat risks). These four functions apply throughout the AI lifecycle, from initial conception through deployment and ongoing monitoring.
Applying the NIST Framework: A hiring AI case study
A company deploys an AI resume screening tool to filter job applicants.
- Govern: Who is accountable if the tool discriminates? Is there an appeals process?
- Map: What harms could occur? (Rejecting qualified candidates from underrepresented groups; encoding historical hiring biases)
- Measure: Test accuracy by demographic group; compare against human screeners; audit for disparate impact.
- Manage: If disparate impact is detected, retrain with more representative data, add human review for borderline cases, or discontinue the tool.
The NIST framework does not prohibit using AI in hiring — it asks organizations to assess and manage the risks systematically rather than deploying first and investigating later.
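The Measure step of the case study can be sketched in code: compute each group's selection rate, then compare the lowest rate to the highest. A common rule of thumb from US employment guidance (the "four-fifths rule") treats a ratio below 0.8 as a red flag warranting investigation. The outcome data below is invented for illustration.

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 screening
    decisions (1 = advanced to interview). Returns rate per group."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are commonly treated as evidence of
    disparate impact (the 'four-fifths rule')."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results for two applicant groups.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 advanced = 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3/8 advanced = 0.375
}
ratio = disparate_impact_ratio(outcomes)
print(round(ratio, 2))  # 0.375 / 0.75 = 0.5 -> below 0.8, flag for review
```

A ratio this low would trigger the Manage step: retraining, human review, or discontinuing the tool.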
The NIST AI RMF identifies seven characteristics of trustworthy AI. Consider an AI system you interact with regularly — a content recommendation algorithm, a spam filter, or a navigation app.
Which of the seven trustworthy AI characteristics does it seem to satisfy well? Which might it fail on? Does the organization deploying it appear to be following any version of the NIST guidance?
You do not need to know the internal workings of the system — focus on what you can observe from the outside.
AI’s real-world deployment is not just a technical story — it is a story about power, fairness, and accountability. The next section examines the ethical dimensions of AI more directly, including how bias arises, what the EU AI Act requires, and what responsible development looks like.
AI governance content incorporates material from the NIST AI Risk Management Framework (AI RMF 1.0), a U.S. Government work in the public domain.
Original content for CSC 114: Artificial Intelligence I, Central Piedmont Community College.
This work is licensed under CC BY-SA 4.0.