What Is AI Really? Flashcards
Master What Is AI Really? with these flashcards. Review key terms, definitions, and concepts using active recall to strengthen your understanding and ace your exams.
Front
Artificial Intelligence
Back
The field of computer science concerned with creating systems that exhibit behaviors we associate with intelligence, such as learning, reasoning, perception, and problem solving. AI encompasses methods for automating tasks that currently require human cognition, using computational models and algorithms. It spans subareas like machine learning, knowledge representation, planning, and robotics.
Front
Turing Test
Back
A behavioral criterion proposed for judging whether a machine acts indistinguishably from a human in conversation. In the test, an interrogator communicates (typically by text) with a human and a machine; the machine passes if the interrogator cannot reliably tell which is which. The test emphasizes acting humanly rather than explaining internal mechanisms.
Front
Loebner Prize
Back
A competition inspired by the Turing Test, held annually from 1991 to 2019, that awarded the chatbot judged most human-like in conversational performance. The Loebner Prize served as a practical benchmark for conversational AI, with constrained conditions and human judges. It highlighted both the strengths and the limitations of contemporary natural language systems.
Front
Chinese Room
Back
John Searle's thought experiment arguing that symbol manipulation alone does not constitute understanding or consciousness. It imagines a person following syntactic rules to manipulate Chinese symbols without understanding their meaning, concluding that systems can appear to understand without genuine semantics. The argument challenges claims that computation by itself yields mental states.
Front
Systems Reply
Back
An objection to the Chinese Room which claims that while the person inside the room does not understand Chinese, the entire system (person plus rule books and procedures) may. Proponents argue understanding can reside at the system level rather than in any single component. The reply shifts focus from individual components to emergent system properties.
Front
Robot Reply
Back
An objection to Searle's Chinese Room that suggests embedding the symbol-manipulating system in a robot with sensors and actuators would provide grounding for meaning. By coupling language with perception and action in the world, the robotic system might acquire genuine understanding. This reply emphasizes the role of embodied interaction in semantics.
Front
Acting Humanly
Back
An approach to AI that evaluates systems by how human-like their behavior is, often operationalized via the Turing Test. The goal is to build agents whose observable actions and responses match those of humans. This perspective focuses on external performance rather than internal cognitive fidelity.
Front
Thinking Humanly
Back
An approach aiming to model the internal mental processes of humans by reproducing cognitive mechanisms revealed by psychology and neuroscience. Systems built under this view are validated by how closely their internal representations and processes match human data. This approach is central to cognitive science and cognitive architectures.
Front
Thinking Rationally
Back
An approach that emphasizes the correctness of reasoning processes according to formal rules and logic. It focuses on producing proofs, arguments, or inferences that follow normative standards of rationality. While foundational to much of AI, it does not capture all intelligent behavior, especially under uncertainty or limited resources.
Front
Acting Rationally
Back
An approach that defines intelligence as choosing actions expected to maximize achievement of the agent’s goals given its beliefs and available information. Rational agents use decision-making procedures to select actions with the best expected outcomes. This pragmatic view underlies many AI formulations such as decision theory and utility maximization.
Front
Rational Agent
Back
An entity that perceives its environment through sensors and acts upon it through actuators to maximize a specified performance measure. A rational agent selects the action expected to best achieve its goals given its percept history and built-in knowledge. Rationality depends on the performance measure, the agent's prior knowledge of the environment, its percept sequence to date, and the actions available to it.
Front
PEAS
Back
A framework for specifying an agent’s task by listing its Performance measure, Environment, Actuators, and Sensors. PEAS helps formalize what success looks like and what resources and inputs the agent will use. It is useful for designing and comparing agents for different tasks.
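To make the framework concrete, here is a minimal Python sketch of a PEAS description for the automated-taxi task mentioned elsewhere in this deck; the specific list entries are illustrative assumptions, not a definitive specification:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """Task specification: Performance, Environment, Actuators, Sensors."""
    performance: list
    environment: list
    actuators: list
    sensors: list

# Hypothetical PEAS description for an automated taxi.
taxi = PEAS(
    performance=["safety", "speed", "comfort", "profit"],
    environment=["roads", "traffic", "pedestrians", "customers"],
    actuators=["steering", "accelerator", "brake", "horn", "display"],
    sensors=["cameras", "GPS", "speedometer", "odometer"],
)
print(taxi.performance)
```

Writing the four lists out this way makes it easy to compare agent designs for different tasks side by side.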
Front
Performance Measure
Back
A metric that evaluates how well an agent achieves its objectives over time, guiding rational action selection. The performance measure should capture desired outcomes (e.g., safety, speed, comfort for a taxi driver) and possibly trade-offs. It provides an objective basis for comparing agent behaviors.
Front
Sensors
Back
Components through which an agent perceives its environment, supplying the raw input (percepts) used for decision making. Examples include cameras, microphones, GPS, and tactile sensors for robots, or keyboards and network input for software agents. Sensor quality and coverage strongly influence an agent’s situational awareness.
Front
Actuators
Back
Components an agent uses to affect its environment, implementing chosen actions such as motor commands, display outputs, or network requests. For robots, actuators include wheels, grippers, and speakers; for software agents, actuators may be database updates or web requests. Actuators define how an agent can change the world.
Front
Agent Function
Back
The mapping from percept history to actions that completely determines an agent’s behavior. Conceptually the agent function describes what action the agent takes for any possible sequence of percepts. Practical agents implement this function via algorithms and internal state representations.
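The agent function can in principle be written down as a literal lookup table from percept sequences to actions; this sketch (with a made-up two-square table) also shows why practical agents use algorithms instead, since the table must cover every possible percept history:

```python
class TableDrivenAgent:
    """Implements the agent function literally as a lookup table
    from the full percept sequence to an action."""
    def __init__(self, table):
        self.table = table      # maps percept-sequence tuples to actions
        self.percepts = []      # percept history grows without bound

    def __call__(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

# Tiny illustrative table; entries are assumptions for demonstration.
table = {
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"),): "Right",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}
agent = TableDrivenAgent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```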
Front
Environment
Back
The external world or context in which an agent operates, including other agents, physical conditions, and tasks to be performed. Environments are characterized by properties like observability, stochasticity, dynamics, and whether they are single- or multiagent. Properly modeling the environment is key to designing effective agents.
Front
Observable
Back
A property describing whether an agent can obtain all relevant information about the environment from its sensors at each decision point. Fully observable environments provide complete state information; partially observable ones require memory, inference, or exploration. Observability affects algorithm choice and complexity.
Front
Deterministic
Back
An environment classification where the next state is entirely determined by the current state and the agent’s action, absent randomness. Deterministic environments permit exact prediction of consequences of actions, simplifying planning and search. Stochastic environments introduce randomness requiring probabilistic models and decision theory.
Front
Episodic
Back
An environment type where the agent’s experience breaks into episodes, each composed of a single perception-action pair that does not depend on prior episodes. Episodic settings simplify learning and decision making because actions need not consider long-term consequences. Sequential environments require planning over time because current actions affect future percepts and rewards.
Front
Static vs Dynamic
Back
A classification describing whether the environment can change while the agent is deliberating. Static environments remain unchanged except by the agent, allowing uninterrupted planning; dynamic environments may evolve, requiring reactivity or real-time responsiveness. Time-critical tasks typically require agents that handle dynamic settings.
Front
Discrete vs Continuous
Back
A distinction about the nature of state, time, and actions in an environment. Discrete environments have a finite or countable set of distinct states and actions, while continuous environments involve continuous variables like positions or velocities. This affects representation choices and applicable algorithms.
Front
Single vs Multiagent
Back
A characterization indicating whether an environment contains only the agent itself or multiple autonomous agents whose actions can affect each other. Multiagent settings introduce strategic interaction, cooperation, and competition, often requiring game-theoretic or coordination methods. Single-agent problems focus on optimizing behavior in a fixed environment.
Front
Simple Reflex Agent
Back
A basic agent that selects actions using condition-action rules mapping current percepts directly to actions. These agents are memoryless and can be short-sighted, failing in partially observable or long-horizon tasks. They work well in fully observable, reactive environments with simple mappings.
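A condition-action rule agent fits in a few lines of Python; the thermostat rules below are hypothetical, chosen just to show the memoryless mapping from the current percept to an action:

```python
def simple_reflex_agent(rules):
    """Build an agent that maps the current percept directly to an
    action via condition-action rules; no memory of past percepts."""
    def agent(percept):
        for condition, action in rules:
            if condition(percept):
                return action
        return "NoOp"  # no rule fired
    return agent

# Hypothetical rules for a thermostat-style agent (percept = temperature).
rules = [
    (lambda temp: temp > 24.0, "cool"),
    (lambda temp: temp < 18.0, "heat"),
]
thermostat = simple_reflex_agent(rules)
print(thermostat(26.0))  # cool
print(thermostat(21.0))  # NoOp
```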
Front
Reflex Agent with State
Back
An extension of simple reflex agents that maintains internal state to summarize relevant aspects of percept history. By tracking unobserved or past information, such agents can perform better in partially observable environments. Their behavior still relies on rules but augmented by inferred state.
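A minimal sketch of a reflex agent with internal state, using an assumed two-square world (locations A and B): the remembered set of clean squares lets it choose NoOp once everything is believed clean, a decision no memoryless rule could justify:

```python
class ModelBasedReflexAgent:
    """Reflex agent with state: remembers which squares it has seen
    clean, so it can stop when all squares are believed clean."""
    def __init__(self):
        self.clean = set()          # internal model of the world

    def __call__(self, percept):
        location, status = percept
        if status == "Dirty":
            self.clean.discard(location)
            return "Suck"
        self.clean.add(location)
        if self.clean >= {"A", "B"}:
            return "NoOp"           # believed done, thanks to memory
        return "Right" if location == "A" else "Left"

agent = ModelBasedReflexAgent()
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Clean")))  # NoOp
```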
Front
Goal-Based Agent
Back
An agent that selects actions by considering future consequences to achieve explicit goal conditions, using search or planning to find action sequences. Goal-based designs enable flexible behavior because the agent can evaluate different plans against goals. They can be computationally intensive and sensitive to changing goals or environments.
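Goal-based action selection is often implemented as search over action sequences; this sketch uses breadth-first search on a toy grid (the grid and its two actions are illustrative assumptions):

```python
from collections import deque

def plan(start, goal_test, successors):
    """Breadth-first search for a shortest action sequence that
    reaches a goal state; a minimal goal-based agent core."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # no plan found

# Toy grid: move right or up from (0, 0) until reaching (2, 1).
succ = lambda s: [("right", (s[0] + 1, s[1])), ("up", (s[0], s[1] + 1))]
print(plan((0, 0), lambda s: s == (2, 1), succ))  # ['right', 'right', 'up']
```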
Front
Utility-Based Agent
Back
An agent that uses a utility function to evaluate and compare states, allowing it to trade off competing objectives and degrees of preference. Utility-based agents choose actions that maximize expected utility rather than just achieving boolean goals. This framework supports decision-making under uncertainty and preference-sensitive behavior.
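Expected-utility maximization can be sketched directly; the umbrella decision below, with made-up probabilities and utilities, shows how a utility-based agent trades off outcomes rather than testing a boolean goal:

```python
def best_action(actions, outcomes, utility):
    """Choose the action with maximum expected utility.
    outcomes(action) yields (probability, state) pairs."""
    return max(
        actions,
        key=lambda a: sum(p * utility(s) for p, s in outcomes(a)),
    )

# Hypothetical umbrella decision: states are (choice, weather).
def outcomes(action):
    p_rain = 0.3
    return [(p_rain, (action, "rain")), (1 - p_rain, (action, "dry"))]

utility = {
    ("take", "rain"): 8, ("take", "dry"): 6,
    ("leave", "rain"): 0, ("leave", "dry"): 10,
}.get

# EU(take) = 0.3*8 + 0.7*6 = 6.6; EU(leave) = 0.3*0 + 0.7*10 = 7.0
print(best_action(["take", "leave"], outcomes, utility))  # leave
```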
Front
Learning Agent
Back
An agent that improves its performance over time through experience, by updating models, policies, or knowledge representations. Learning agents can adapt to changing environments and tasks, using techniques like supervised learning, reinforcement learning, and model induction. They combine a performance element, a critic, a learning element, and a problem generator.

Front
Vacuum Agent
Back
A canonical toy agent that demonstrates basic AI concepts by cleaning rooms based on percepts of location and dirt. It can be implemented as a simple reflex agent, a reflex agent with state, or a goal-based agent, illustrating limits of memoryless rules in partially observable settings. The vacuum world is commonly used to teach PEAS and agent design.
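The standard two-square vacuum world reflex agent fits in a few lines; locations A and B and the (location, status) percept format follow the usual textbook presentation:

```python
def reflex_vacuum_agent(percept):
    """Classic two-square vacuum world: suck if dirty,
    otherwise move toward the other square."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Note the limitation the card describes: with no memory, this agent keeps shuttling between squares forever even after both are clean.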
Front
TD-Gammon
Back
A backgammon-playing system that learned expert-level play via reinforcement learning with a neural network, demonstrating powerful self-improvement from experience. TD-Gammon used temporal-difference learning over many self-play games to train its neural-network evaluation function. It exemplifies learning agents applied to stochastic, multiagent games.
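The core idea, temporal-difference value updating, can be shown in miniature; this TD(0) sketch over a dictionary of state values is a drastic simplification of the actual system (which used TD(λ) with a neural network):

```python
def td0_update(V, s, reward, s_next, alpha=0.1, gamma=1.0):
    """One TD(0) step: nudge V(s) toward the bootstrapped target
    reward + gamma * V(s_next)."""
    v = V.get(s, 0.0)
    target = reward + gamma * V.get(s_next, 0.0)
    V[s] = v + alpha * (target - v)
    return V[s]

# Toy example: once the successor state is known to be a win (value 1),
# the predecessor's value is pulled upward on the next visit.
V = {"win": 1.0}
print(td0_update(V, "mid", reward=0.0, s_next="win"))  # 0.1
```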
Front
ALVINN
Back
An autonomous-vehicle system that trained a neural network to map road images from a forward-facing camera directly to steering commands for lane keeping. ALVINN demonstrated how learning approaches could handle continuous, partially observable driving tasks. It is an early example of end-to-end perception-to-action learning in robotics.
Front
TALE-SPIN
Back
A system for automated story generation that models characters, goals, and actions to produce narrative text. TALE-SPIN uses planning and domain knowledge to assemble events and language, aiming for coherent and entertaining stories. It illustrates AI applications in creative text generation and natural language planning.
Front
Webcrawler
Back
A software agent that autonomously traverses the web by fetching pages and following hyperlinks to discover and collect information. Webcrawlers use perception (page content) and simple reasoning or pattern matching to select links of interest, enabling search engines and data-mining applications. They highlight scalable, distributed agent operation.
Front
Xavier Robot
Back
An example mobile delivery robot that combined perception (vision, sonar), navigation, and learning methods such as Markov models and Bayesian classification to deliver mail. Xavier illustrates integrated agent design with sensors, actuators, and diverse reasoning techniques for real-world tasks. It highlights robotics challenges like localization and task planning.
Front
Pathfinder System
Back
A medical diagnosis system focused on hematopathology that uses probabilistic reasoning (Bayesian networks) and simulations to suggest diagnoses and additional tests. Pathfinder demonstrates how AI can operate in partially observable, stochastic, and episodic healthcare environments. It exemplifies domain-specific expert systems and uncertainty handling.
Front
Factory Floor Scheduling
Back
An AI application that orders and allocates tasks to machines to optimize production metrics using constraint satisfaction, planning, and genetic algorithms. Scheduling systems handle combinatorial complexity, resource constraints, and hierarchical task structures. They highlight practical industrial uses of search and optimization techniques.
Front
Agent Components
Back
The modular parts that make up an AI agent: sensors for perception, actuators for action, a representation of state or knowledge, reasoning or learning modules, and a performance measure guiding behavior. Effective agents integrate these components to sense, decide, act, and improve. Design choices depend on the task and environment properties.
Front
Foundations of AI
Back
Disciplines that contributed core ideas to AI, including philosophy (logic and rationality), mathematics (formal languages and probability), economics (decision-making and utility), neuroscience (information processing in brains), psychology (cognition studies), and linguistics (language structure and learning). AI draws on these fields to formulate theories and algorithms for intelligent behavior. Influential historical figures include Aristotle (logic), Boole (Boolean algebra), and Adam Smith (economics), along with the founders of cognitive science.
Front
Dartmouth Conference
Back
The 1956 workshop widely regarded as the founding event of artificial intelligence as a formal research field. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, and attended by pioneers including Allen Newell and Herbert Simon, the conference proposed using formal computation to model intelligence and spawned early AI research programs. It catalyzed development of languages, architectures, and symbolic methods.
Front
John McCarthy
Back
A founding figure of AI who coined the term 'artificial intelligence' and invented the LISP programming language, influential for symbolic AI and reasoning research. McCarthy advocated formal logical approaches to AI and contributed to early ideas about representation and planning. His work shaped much of AI's early direction.
Front
Marvin Minsky
Back
A pioneering AI researcher who built one of the first neural-network learning machines (SNARC), developed symbolic frame representations, and proposed the 'society of mind' metaphor for intelligence. Minsky explored how collections of simple processes could produce complex cognition, influencing both connectionist and symbolic AI, even as his critique of perceptrons (with Seymour Papert) tempered early neural-network enthusiasm. He was a central figure in early AI community-building.
Front
Newell and Simon
Back
Researchers who developed early AI programs like the General Problem Solver and advocated for studying problem solving as search in state spaces. Their work formalized heuristics and search techniques for symbolic reasoning and human-like problem solving. They helped establish computational cognitive modeling as a core AI approach.
Front
Claude Shannon
Back
A foundational information theorist whose work on information and computation influenced early AI, including computer game playing and signal processing. Shannon wrote a landmark 1950 paper on programming a computer to play chess and contributed the principles for encoding and transmitting information. His interdisciplinary influence reached AI, communications, and control systems.
Front
Core AI Tasks
Back
The major functional areas within AI, including knowledge representation, search and problem solving, planning, machine learning, natural language processing, uncertainty reasoning, computer vision, and robotics. These tasks provide building blocks for intelligent systems and often interact in practical applications. Research advances in one area often enable progress in others.