Introduction to Artificial Intelligence (CptS 440/540) Flashcards

Master Introduction to Artificial Intelligence (CptS 440/540) with these flashcards. Review key terms, definitions, and concepts using active recall to strengthen your understanding and ace your exams.


Front

Artificial Intelligence

Back

The field of study focused on creating systems that perform tasks requiring intelligence when done by humans. It combines methods from computer science, cognitive science, mathematics, and other disciplines to model, reason, and act.

Front

Turing Test

Back

A behavioral test for human-like intelligence where an interrogator must distinguish between a human and a machine via conversation. If the machine's responses are indistinguishable from a human's, it is said to pass the test.

Front

Chinese Room

Back

A thought experiment by John Searle arguing that symbol manipulation alone does not constitute understanding. It challenges the notion that syntactic processing is sufficient for semantic comprehension in AI.

Front

PEAS

Back

A framework for specifying AI tasks: Performance measure, Environment, Actuators, Sensors. It helps precisely define what an agent is expected to do and what resources it has.
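A PEAS description is just structured data, so it can be sketched as a small record type. The vacuum-cleaner task below is a hypothetical example; the field values are illustrative, not part of the course material.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """PEAS task specification: Performance measure, Environment, Actuators, Sensors."""
    performance: str
    environment: str
    actuators: list
    sensors: list

# Hypothetical PEAS specification for a two-room vacuum-cleaner agent.
vacuum = PEAS(
    performance="dirt cleaned per unit energy used",
    environment="two-room world (squares A and B) with dirt",
    actuators=["move_left", "move_right", "suck"],
    sensors=["location", "dirt_detected"],
)
```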

Front

Rational Agent

Back

An agent that chooses actions expected to maximize achievement of its goals according to a performance measure. Rationality depends on the agent’s knowledge, percepts, and available actions.

Front

Simple Reflex Agent

Back

An agent that selects actions based solely on the current percept using condition-action rules. It is fast but can be short-sighted or fail in partially observable environments.
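The condition-action idea can be sketched in a few lines. This assumes the classic two-square vacuum world (squares "A" and "B") as a hypothetical setting; note the agent looks only at the current percept, never at history.

```python
# Simple reflex agent for a hypothetical two-square vacuum world.
# The percept is (location, dirty); the action depends ONLY on this percept.
def simple_reflex_agent(percept):
    location, dirty = percept
    if dirty:
        return "suck"          # rule: current square dirty -> clean it
    if location == "A":
        return "move_right"    # rule: at A and clean -> go to B
    return "move_left"         # rule: at B and clean -> go to A
```

Because the agent has no memory, it will shuttle between clean squares forever, which illustrates the short-sightedness mentioned above.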

Front

Reflex Agent with State

Back

A reflex agent that maintains internal state to summarize past percepts and handle partially observable environments. The state helps the agent make better decisions than pure reflex agents.
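Adding internal state fixes the reflex agent's shuttle problem from the previous card. The sketch below uses the same hypothetical two-square vacuum world; the "no_op" action and the stopping rule are invented for illustration.

```python
# Reflex agent with state: it remembers which squares it has seen clean.
def make_stateful_agent():
    cleaned = set()                      # internal state summarizing past percepts
    def agent(percept):
        location, dirty = percept
        if dirty:
            return "suck"
        cleaned.add(location)            # update state from the current percept
        if {"A", "B"} <= cleaned:
            return "no_op"               # both squares known clean: stop moving
        return "move_right" if location == "A" else "move_left"
    return agent
```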

Front

Goal-Based Agent

Back

An agent that reasons about future actions and selects sequences that achieve specified goals. It allows planning and deliberation but can be more computationally intensive.

Front

Utility-Based Agent

Back

An agent that uses a utility function to assign numeric values to states and chooses actions that maximize expected utility. This supports trade-offs among competing objectives.
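Expected-utility maximization is easy to show concretely. The outcome model below (a "safe" versus "risky" action with made-up probabilities and utilities) is purely illustrative.

```python
# Expected-utility action selection (sketch with an invented outcome model).
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(action_models):
    """Pick the action whose expected utility is highest."""
    return max(action_models, key=lambda a: expected_utility(action_models[a]))

# Hypothetical model: 'safe' yields utility 5 for sure;
# 'risky' yields 10 with probability 0.6, else 0.
models = {
    "safe":  [(1.0, 5.0)],
    "risky": [(0.6, 10.0), (0.4, 0.0)],
}
```

Here the expected utilities are 5.0 for "safe" and 6.0 for "risky", so the agent trades certainty for a higher expected payoff, exactly the kind of trade-off the card describes.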

Front

Learning Agent

Back

An agent that can improve its performance over time by acquiring new knowledge or adapting behavior based on experience. Learning can target performance, models of the world, or representation discovery.

Front

Fully Observable

Back

An environment property where the agent’s sensors provide complete and accurate information about the current state. Many idealized games like chess are treated as fully observable.

Front

Partially Observable

Back

An environment property where the agent’s sensors provide incomplete or noisy information about the state. Real-world tasks like taxi driving are often partially observable.

Front

Deterministic Environment

Back

An environment where the next state is entirely determined by the current state and the agent’s action. Deterministic settings simplify planning and prediction.

Front

Stochastic Environment

Back

An environment where state transitions or observations are probabilistic, often due to noise or other agents. Stochasticity requires probabilistic reasoning and robust decision-making.
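A stochastic transition can be sketched as a sampled state update. The "slip" model below (intended move succeeds with probability 0.8, otherwise the agent stays put) is a common illustrative assumption, not a specific model from the course.

```python
import random

# Stochastic transition sketch: same state and action can yield
# different next states, depending on a random "slip".
def step(state, action, p_success=0.8, rng=random):
    if rng.random() < p_success:
        return state + action   # intended effect occurs
    return state                # slip: no movement
```

Setting p_success=1.0 recovers the deterministic case from the previous card, which is one way to see that deterministic environments are a special case of stochastic ones.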

Front

Episodic Task

Back

A task structure where each episode is a self-contained percept-action pair, and the quality of an action depends only on the current episode, not on earlier ones. Episodic tasks simplify learning because history is irrelevant.

Front

Sequential Task

Back

A task where current decisions affect future percepts and rewards, requiring the agent to consider long-term consequences. Most real-world problems are sequential.

Front

Static vs Dynamic

Back

Static environments do not change while an agent deliberates, whereas dynamic environments do. Dynamic settings demand faster or continual updating of plans and beliefs.

Front

Discrete vs Continuous

Back

Discrete environments have countable sets of states and actions; continuous environments have uncountable ranges (e.g., real-valued positions). Algorithm choices differ significantly between these types.

Front

Multiagent Environment

Back

An environment containing other agents whose actions affect outcomes, often making the setting strategic. Planning must account for interactions, cooperation, or competition.

Front

Loebner Prize

Back

An annual competition, first held in 1991 and inspired by the Turing Test, that awards chatbots based on their ability to appear human in conversation. It highlights practical challenges of conversational AI but is often criticized for rewarding superficial trickery over genuine understanding.
