
Unit 1 — Computer Organization & Memory — Concise Study Notes


🏫 Course Overview & Core Concepts

Computer Architecture vs Computer Organization: Architecture specifies attributes visible to the programmer — instruction set, data widths, addressing modes, and I/O mechanisms. Organization is how those features are implemented in hardware — control signals, memory technology, interfaces.

🧩 Main Components of a Computer

CPU, Memory, I/O devices, and System buses form the fundamental parts. The CPU contains the ALU (arithmetic & logic), Control Unit (CU), and various registers. Memory is split into primary (RAM/ROM/cache) and secondary (disks, SSDs). I/O modules act as interfaces between peripherals and CPU/memory.

⚙️ CPU Functions & Internal Structure

CPU must: fetch instructions, interpret them, fetch/process data, and write results. Key elements: ALU, temporary storage (registers), and mechanisms to move data (system bus).

  • ALU operations: addition, subtraction (with borrow/carry), increment/decrement, two's complement, and bitwise logical ops (AND, OR, XOR, NOT). Also shifts (arithmetic/logical) and rotates.
  • Registers: small, fast storage. Types: user-visible (general-purpose, address/data registers) and control/status registers (PC, IR, MAR, MBR, PSW). Trade-offs exist between many general-purpose registers (flexibility) and special-purpose registers (compact instruction encoding).
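The ALU operations listed above can be sketched in a few lines of Python. This is an illustrative model, not real hardware: the word width, the `MASK` constant, and the function names are assumptions chosen for the example.

```python
WIDTH = 8                    # assumed word width for this sketch
MASK = (1 << WIDTH) - 1      # 0xFF: keeps results inside the word

def twos_complement(x):
    """Two's complement negation within the word width."""
    return (~x + 1) & MASK

def logical_shift_left(x, n):
    """Logical left shift; bits shifted past the word width are lost."""
    return (x << n) & MASK

def rotate_left(x, n):
    """Rotate: bits shifted out on the left re-enter on the right."""
    n %= WIDTH
    return ((x << n) | (x >> (WIDTH - n))) & MASK

def add_with_carry(a, b):
    """Addition reporting the carry flag, as an ALU status bit would."""
    total = a + b
    return total & MASK, total > MASK  # (result, carry)

print(twos_complement(1))              # 255 (0xFF, i.e. -1 in 8 bits)
print(add_with_carry(0xFF, 1))         # (0, True): wraps with carry set
```

Note the difference between a shift (bits fall off one end) and a rotate (bits wrap around), which the ALU typically exposes as separate operations.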

🧠 Architectural Models

  • Von Neumann: single shared memory for instructions and data; simpler but subject to memory bandwidth/contention (instruction and data share the same bus).
  • Harvard: separate memories and buses for instructions and data; can fetch instructions and data simultaneously — common in RISC microcontrollers and DSPs.

🔌 Peripherals & I/O Challenges

Peripherals vary widely in speed, format, and mechanical vs electronic behavior. Because most are slower than CPU/RAM, I/O modules and buffering are required to match rates and formats.

✅ Key Takeaways

Understand what the programmer sees (architecture) vs how hardware implements it (organization). Know CPU responsibilities, ALU capabilities, register roles, and the distinction between Von Neumann and Harvard architectures.

🔁 Instruction Set, Programs & Instruction Elements

An instruction = opcode (operation) + operand references (where data lives). The processor's instruction set is the complete set of instructions it can execute. A program is a sequence of these instructions.

🧭 Operand Types & Addressing

Operands may reside in registers, be immediate constants embedded in the instruction, live in memory (main/cache), or come from I/O. Effective address computation is required for memory operands.

🛠️ Instruction Classification

  • By function: data processing, data storage, data movement (I/O), program flow control.
  • By address count: 0-, 1-, 2-, 3-address formats (more addresses per instruction → fewer instructions needed per program, but larger and more complex encodings).
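A 0-address format is the least intuitive of the four, so here is a toy stack machine evaluating Y = (A − B) + C. The opcodes, memory model, and `run` function are hypothetical, invented for this sketch: only PUSH/POP name a memory location; ADD/SUB carry no addresses because their operands are implicit on the stack.

```python
def run(program, memory):
    """Tiny 0-address (stack) machine. ADD/SUB take no operand fields;
    they pop two values and push the result."""
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(memory[arg[0]])
        elif op == "POP":
            memory[arg[0]] = stack.pop()
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "SUB":
            b, a = stack.pop(), stack.pop()
            stack.append(a - b)
    return memory

# Y = (A - B) + C expressed in 0-address form
mem = {"A": 10, "B": 4, "C": 5, "Y": 0}
prog = [("PUSH", "A"), ("PUSH", "B"), ("SUB",),
        ("PUSH", "C"), ("ADD",), ("POP", "Y")]
run(prog, mem)
print(mem["Y"])  # 11
```

A 3-address machine would do the same work in two instructions (SUB T, A, B then ADD Y, T, C), illustrating the trade-off: fewer, fatter instructions versus more, thinner ones.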

🔎 Control Unit & Micro-operations

The Control Unit (CU) orchestrates instruction sequencing, register transfers, and micro-operations (elementary steps such as reading a register or enabling the ALU). To design CU behavior: (1) define processor elements, (2) list micro-operations, (3) decide control actions to trigger them.

⏱️ Instruction Cycle (Subcycles & States)

Typical instruction subcycles: Fetch → Decode → (Indirect) → Execute → Interrupt handling. Key microstates include:

  • IF (Instruction Fetch): PC → MAR → memory read → MBR → IR; PC updated to next instruction.
  • IOD (Instruction Operation Decode): decode opcode and determine addressing modes and operands.
  • OAC (Operand Address Calculation): compute effective addresses for memory or I/O operands.
  • OF (Operand Fetch) / DO (Data Operation) / OS (Operand Store).

🧾 Registers used in the Cycle

  • PC: address of next instruction.
  • IR: holds current instruction.
  • MAR: memory address register (address placed on bus).
  • MBR/MDR: memory buffer/data register (data read/written).

▶️ Data Flow Examples

  • Instruction fetch: PC → MAR; memory read → MBR → IR; increment PC.
  • Indirect cycle: If instruction uses indirect addressing, contents of MBR (an address) → MAR → memory read to get operand address.
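The two data flows above can be traced step by step in code. This is a minimal sketch, not a real ISA: the memory contents, the `LOAD_INDIRECT` opcode, and the ACC register are assumptions made for the example.

```python
# Hypothetical memory: word 0 holds an instruction whose operand field (5)
# is an indirect address; word 5 holds the operand's real address (9).
memory = {0: ("LOAD_INDIRECT", 5), 5: 9, 9: 42}

PC, IR, MAR, MBR, ACC = 0, None, None, None, None

# --- Fetch cycle: PC -> MAR; memory read -> MBR -> IR; PC incremented ---
MAR = PC
MBR = memory[MAR]
IR = MBR
PC += 1

opcode, address = IR

# --- Indirect cycle: the fetched address is itself an address ---
if opcode == "LOAD_INDIRECT":
    MAR = address
    MBR = memory[MAR]      # MBR now holds the operand's effective address
    MAR = MBR
    MBR = memory[MAR]      # second memory read fetches the actual operand
    ACC = MBR

print(PC, ACC)  # 1 42
```

The key point the trace makes concrete: every memory access goes through the MAR/MBR pair, and indirect addressing simply inserts one extra read between decode and execute.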

✅ Key Takeaways

Know the instruction cycle steps and the role of CU and micro-operations. Be able to map which registers participate during fetch, decode, indirect, execute, and store phases.

💾 Memory Types & Roles

Memory stores programs and data. Not all data must be in CPU at once, so systems use a hierarchy of storage to balance capacity, cost, and speed. Internal (primary) memory is directly accessible by CPU (RAM, ROM, cache). External (secondary) memory is for long-term storage (HDD, SSD, flash, optical).

📦 Memory Characteristics

Important attributes: location, capacity, unit of transfer, access method, access time, cycle time, data transfer rate, physical type, volatility, and organization.

🔁 Access Methods

  • Sequential (e.g., tape): access depends on position.
  • Direct/block (e.g., disk): jump to block then sequential search.
  • Random (e.g., RAM): constant-time access independent of location.
  • Associative (e.g., cache tag lookup): locate by content comparison.

🧱 RAM Organization & Operations

RAM is organized as addressable words/bytes. Address bus selects location; data bus carries the word. Read: CPU places address, asserts read, data flows to CPU. Write: CPU places address/data, asserts write, data stored.

SRAM (Static RAM)

SRAM uses cross-coupled inverters (flip-flops) — typically 6 transistors per cell. It is fast, non-refreshing (non-destructive reads), but costly and lower density. Typical uses: CPU caches, register files, high-speed buffers.

DRAM (Dynamic RAM)

DRAM stores data as charge on a capacitor (1 transistor + 1 capacitor per cell). It is high density and low cost per bit, but volatile and requires periodic refresh because charges leak. Read is typically destructive (sense amplifiers restore data).

ROM & Variants

ROM is non-volatile and holds firmware. Variants include:

  • Mask ROM: factory programmed.
  • PROM: programmable once by user (fuses).
  • EPROM: erasable by UV light and reprogrammable.
  • EEPROM: electrically erasable/programmable (byte-wise).
  • Flash: block-erasable, widely used in SSDs, USB drives, cameras; limited write/erase cycles but non-volatile and fast relative to disk.

✅ Key Takeaways

Match memory type to use: SRAM for speed (cache), DRAM for capacity (main memory), ROM/Flash for persistent firmware or storage. Understand refresh needs and trade-offs of density vs speed vs cost.

🧭 Memory Hierarchy — Purpose & Trade-offs

Memory hierarchy arranges storage levels so that as you move closer to the CPU, speed and cost per bit increase while capacity decreases. Typical levels: Registers → L1/L2 Cache → Main Memory (DRAM) → Disk/SSD → Tape. The hierarchy works well if accesses to slower levels are much less frequent.

⚡ Cache Basics

Cache is a small, fast memory sitting between CPU and main memory to reduce average access time. On a CPU memory request: check cache → if found (hit) return quickly; if not (miss) fetch block from lower memory, place into cache, then deliver to CPU.

🧾 Cache Structure & Terminology

Cache consists of data storage (blocks/lines) and tag memory that stores metadata identifying which main-memory block is present. Key metrics:

  • Hit rate = hits / (hits + misses).
  • Miss rate = 1 − hit rate.
  • Hit time: time to access cache (includes tag check).
  • Miss penalty: additional time if miss (fetch from lower level + replace block).

Typical values: L1 hit time ~1–2 cycles, L2 around ten cycles; L1 miss rate often a few percent.
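These metrics combine into a single figure of merit, the average memory access time (AMAT): every access pays the hit time, and misses additionally pay the miss penalty. A quick sketch, using illustrative numbers that are assumptions rather than values for any specific CPU:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles:
    AMAT = hit_time + miss_rate * miss_penalty."""
    return hit_time + miss_rate * miss_penalty

# Assumed numbers: 1-cycle L1 hit, 3% miss rate, 100-cycle penalty
# to fetch the block from main memory.
print(amat(hit_time=1, miss_rate=0.03, miss_penalty=100))  # 4.0
```

Even a small miss rate dominates the average when the miss penalty is large, which is the quantitative reason the hierarchy only works if slower levels are accessed rarely.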

🧭 Cache Design Parameters

Design choices include: cache size, block (line) size, mapping (placement) strategy, associativity, replacement algorithm (LRU, FIFO, random, LFU), and write policy (write-through vs write-back). Larger cache reduces miss rate but increases cost and lookup time.

✅ Key Takeaways

Understand cache purpose (reduce average memory access time), how tags identify blocks, and the main performance metrics (hit/miss rates, hit time, miss penalty). Cache design balances capacity, speed, and complexity.

🧩 Cache Mapping Techniques — Placement Strategies

Cache placement strategies determine where a main-memory block can reside in cache:

  • Direct-mapped: each main-memory block maps to exactly one cache line (Index = block_number MOD number_of_slots). Simple and fast but prone to conflict misses.
  • Fully associative: a block can be placed anywhere in cache; requires searching all tags (or parallel comparators). Low conflict misses but complex and expensive.
  • Set-associative (n-way): compromise; cache divided into sets. A block maps to exactly one set and can occupy any of the n lines within that set. Index = block_number MOD number_of_sets.
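The two MOD formulas above are one-liners in code. The cache sizes below are hypothetical, chosen only to show that the same block lands in different places under the two schemes:

```python
def direct_mapped_line(block_number, num_lines):
    """Direct mapping: exactly one candidate line per block."""
    return block_number % num_lines

def set_associative_set(block_number, num_sets):
    """n-way set-associative: one candidate set, any line within it."""
    return block_number % num_sets

# Hypothetical cache: 8 lines direct-mapped, or 4 sets of 2 ways each
print(direct_mapped_line(13, 8))   # 5
print(set_associative_set(13, 4))  # 1
```

With the set-associative layout, block 13 may go in either way of set 1, so two hot blocks that collide under direct mapping can coexist.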

📐 Address Structure (Direct Mapping)

Split a memory address into fields: [Tag | Line(Index) | Word]. If the block address uses s bits, the word offset uses w bits, and the cache index uses r bits, then:

  • Word field = w bits
  • Line (slot) field = r bits
  • Tag = s − r bits

Example: For main memory size = 16 MB (2^24 bytes) and block size = 4 B, block address bits are s = 22. If the cache has 2^14 lines, then line bits r = 14 and tag bits = s − r = 8.
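The worked example can be verified with a short computation. The function name is invented for this sketch, and it assumes byte-addressable memory with power-of-two sizes:

```python
import math

def direct_mapped_fields(memory_bytes, block_bytes, cache_lines):
    """Return (tag, line, word) field widths in bits for a
    direct-mapped cache with the given geometry."""
    addr_bits = int(math.log2(memory_bytes))   # total address bits
    word_bits = int(math.log2(block_bytes))    # offset within a block
    line_bits = int(math.log2(cache_lines))    # index into the cache
    s = addr_bits - word_bits                  # block-address bits
    tag_bits = s - line_bits
    return tag_bits, line_bits, word_bits

# 16 MB memory, 4 B blocks, 2^14 cache lines (the example above)
print(direct_mapped_fields(2**24, 4, 2**14))  # (8, 14, 2)
```

This reproduces the example's numbers: 8 tag bits, 14 line bits, and 2 word-offset bits, summing to the full 24-bit address.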

🔁 Replacement & Write Policies

On a miss where cache is full, a replacement policy chooses a victim (LRU, FIFO, random). For writes:

  • Write-through: write to cache and main memory immediately (simpler coherence, higher memory traffic).
  • Write-back: update cache only and mark line dirty; write to main memory when the line is evicted (reduces memory writes but needs dirty-bit management).
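LRU replacement and write-back dirty-bit handling fit together naturally; here is a toy sketch combining them. This is illustrative only, not a hardware design: it models a fully associative cache, uses a plain dict called `lower` to stand in for main memory, and leans on `OrderedDict` insertion order to track recency.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy fully associative write-back cache with LRU replacement."""
    def __init__(self, capacity, lower):
        self.capacity = capacity
        self.lower = lower              # dict standing in for main memory
        self.lines = OrderedDict()      # addr -> (value, dirty)

    def _evict_if_full(self):
        if len(self.lines) >= self.capacity:
            victim, (value, dirty) = self.lines.popitem(last=False)  # LRU
            if dirty:                   # write back only dirty victims
                self.lower[victim] = value

    def read(self, addr):
        if addr not in self.lines:      # miss: fetch from the lower level
            self._evict_if_full()
            self.lines[addr] = (self.lower[addr], False)
        self.lines.move_to_end(addr)    # mark most recently used
        return self.lines[addr][0]

    def write(self, addr, value):
        if addr not in self.lines:
            self._evict_if_full()
        self.lines[addr] = (value, True)  # update cache only; set dirty
        self.lines.move_to_end(addr)

mem = {0: 10, 1: 20, 2: 30}
c = WriteBackCache(capacity=2, lower=mem)
c.write(0, 99)     # dirty line for addr 0; main memory still holds 10
c.read(1)          # fills addr 1
c.read(2)          # evicts addr 0 (LRU) -> dirty, so written back
print(mem[0])      # 99
```

The final print shows the write-back moment: main memory sees the new value for address 0 only when its dirty line is evicted, not when the CPU wrote it.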

➕ Pros & Cons of Direct Mapping

  • Advantages: very simple and inexpensive to implement; quick index computation.
  • Disadvantages: poor hit ratio under conflict-heavy access patterns (two frequently used blocks that map to the same line will thrash).

✅ Key Takeaways

Be able to explain direct, fully associative, and set-associative mapping, compute address-field sizes (Tag, Index, Offset), and understand replacement/write policy trade-offs. Choose associativity to balance conflict misses vs implementation complexity.
