Computer Architecture Fundamentals Quiz Practice

Test Hardware Organization and System Design Skills

Difficulty: Moderate
Questions: 20

Dive into this comprehensive Computer Architecture quiz designed to sharpen your understanding of CPU design, memory hierarchy, and system-level performance. Ideal for students and educators seeking a practical challenge, this practice quiz can be freely modified in the editor to tailor questions to your curriculum. Each multiple-choice question draws on digital logic and core computer architecture fundamentals. Track your progress, deepen your hardware organization skills, and explore related quizzes to reinforce your learning.

Practice Questions

1. Which CPU component performs arithmetic and logical operations?
   A. Arithmetic Logic Unit (ALU)
   B. Control Unit
   C. Register File
   D. Cache
   Answer: A. The Arithmetic Logic Unit (ALU) is responsible for performing arithmetic and logical operations. Other components like the Control Unit orchestrate instruction flow rather than execute actual computations.

2. Which cache level is typically the smallest and fastest within the CPU?
   A. L1 cache
   B. L2 cache
   C. L3 cache
   D. Main memory
   Answer: A. L1 cache is the smallest and fastest cache, closest to the CPU core. Larger levels such as L2 and L3 offer more capacity but have higher latency.

3. Which architecture uses a single shared memory space for both instructions and data?
   A. Von Neumann architecture
   B. Harvard architecture
   C. Dataflow architecture
   D. Modified Harvard architecture
   Answer: A. Von Neumann (also called Princeton) architecture uses one memory space for instructions and data, accessed over the same bus. Harvard architecture, in contrast, uses separate memories and buses for instructions and data.

4. What is the primary purpose of CPU registers?
   A. Temporarily store data and instructions for fast access
   B. Hold I/O device addresses
   C. Maintain cache coherence
   D. Execute arithmetic operations
   Answer: A. CPU registers hold data and instructions that the CPU accesses most frequently, providing the fastest storage. They are not used for cache coherence or direct arithmetic execution, which is handled by the ALU.

5. What is a system bus in computer architecture?
   A. A set of parallel wires for data, address, and control signals connecting CPU, memory, and I/O
   B. A high-speed dedicated cache inside the CPU
   C. A protocol for network communication
   D. A series of registers within the ALU
   Answer: A. A system bus consists of data, address, and control lines that facilitate communication between the CPU, memory, and I/O devices. It is not an internal cache or a network protocol.

6. In a five-stage instruction pipeline, which stage decodes the instruction and reads registers?
   A. Decode/Register Fetch stage
   B. Instruction Fetch stage
   C. Execute stage
   D. Memory Access stage
   Answer: A. The Decode/Register Fetch stage interprets the fetched instruction and reads source operands from the register file. The Execute stage performs the actual computation later.

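The overlap between stages is easy to visualize. Below is a minimal Python sketch (the instruction names are invented for illustration) that prints which stage each instruction occupies in each cycle, assuming an ideal pipeline with no stalls:

```python
# Classic 5-stage pipeline occupancy chart (ideal: no stalls, one
# instruction issued per cycle). Instruction names are placeholders.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def pipeline_chart(instructions):
    total_cycles = len(instructions) + len(STAGES) - 1
    print("cycle:   " + " ".join(f"{c:>4}" for c in range(1, total_cycles + 1)))
    for i, instr in enumerate(instructions):
        cells = []
        for c in range(total_cycles):
            stage = c - i  # instruction i enters IF in cycle i + 1
            cells.append(f"{STAGES[stage]:>4}" if 0 <= stage < len(STAGES) else "   .")
        print(f"{instr:<8} " + " ".join(cells))

pipeline_chart(["add", "sub", "load"])
```

With three instructions, the chart shows the last one completing in cycle 7 rather than cycle 15, which is the entire benefit of overlapping stages.
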
7. Which hazard occurs when an instruction depends on the result of a previous instruction?
   A. Data hazard
   B. Control hazard
   C. Structural hazard
   D. Cache hazard
   Answer: A. A data hazard arises when one instruction requires data produced by a prior instruction. Control hazards are due to branches, and structural hazards are resource conflicts.

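As a rough illustration of how such a dependency can be detected, here is a hedged sketch; the three-instruction program, the register names, and the two-cycle hazard window are all assumptions chosen for the example:

```python
# Toy RAW (read-after-write) hazard detector. Each entry is
# (text, destination register, source registers).
program = [
    ("add r1, r2, r3", "r1", ("r2", "r3")),
    ("sub r4, r1, r5", "r4", ("r1", "r5")),  # reads r1 -> depends on the add
    ("or  r6, r7, r8", "r6", ("r7", "r8")),  # independent
]

def find_raw_hazards(program, window=2):
    """Flag instructions whose sources were written by one of the
    previous `window` instructions."""
    hazards = []
    for i, (_, _, sources) in enumerate(program):
        for j in range(max(0, i - window), i):
            if program[j][1] in sources:  # producer's dest feeds a source
                hazards.append((program[j][0], program[i][0]))
    return hazards

for producer, consumer in find_raw_hazards(program):
    print(f"RAW hazard: '{consumer}' needs the result of '{producer}'")
```
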
8. What is a common static branch prediction strategy?
   A. Always predict branch as taken
   B. Use two-bit saturating counters
   C. Predict based on history tables
   D. Use dynamic runtime feedback
   Answer: A. A simple static prediction strategy is to always predict that a branch will be taken. More advanced dynamic strategies involve history tables or counters but are not static.

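To see why the static/dynamic distinction matters, here is a minimal sketch comparing always-taken prediction with a two-bit saturating counter on an invented outcome trace (True means taken; the initial counter value is also an assumption):

```python
# Static always-taken vs. a dynamic 2-bit saturating counter.
trace = [True] * 3 + [False] * 97  # a branch that is mostly not taken

static_correct = sum(trace)  # always-taken is right only when the branch is taken

counter = 2  # states 0-1 predict not taken, 2-3 predict taken (start: weakly taken)
dynamic_correct = 0
for taken in trace:
    prediction = counter >= 2
    dynamic_correct += (prediction == taken)
    counter = min(counter + 1, 3) if taken else max(counter - 1, 0)

print(f"always taken : {static_correct / len(trace):.0%} accurate")
print(f"2-bit counter: {dynamic_correct / len(trace):.0%} accurate")
```

On a branch that is mostly not taken, the fixed guess is almost always wrong, while the counter mispredicts only briefly before saturating toward the correct prediction.
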
9. Which feature distinguishes RISC architectures from CISC architectures?
   A. Fixed instruction length
   B. Microcoded complex instructions
   C. Variable-length addressing modes
   D. Hardware-supported complex operations
   Answer: A. RISC architectures use fixed instruction lengths to simplify decoding and pipelining. CISC designs often have variable-length, microcoded instructions with complex addressing modes.

10. In a 4-way set associative cache, each set contains how many cache lines?
    A. 4
    B. 1
    C. 2
    D. 8
    Answer: A. A 4-way set associative cache has four lines (or ways) in each set. Direct-mapped caches have a single line per set, and higher associativity increases the number of lines per set.

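To make the mapping concrete, this sketch computes which set an address falls into; the 32 KiB capacity, 64-byte lines, and example address are assumptions chosen for illustration, not values from the quiz:

```python
# Address decomposition for a 4-way set associative cache.
CACHE_BYTES = 32 * 1024
LINE_BYTES = 64
WAYS = 4
NUM_SETS = CACHE_BYTES // (LINE_BYTES * WAYS)  # 128 sets, 4 lines each

def locate(addr):
    offset = addr % LINE_BYTES                    # byte within the line
    set_index = (addr // LINE_BYTES) % NUM_SETS   # which set the line maps to
    tag = addr // (LINE_BYTES * NUM_SETS)         # identifies the line within the set
    return tag, set_index, offset

tag, set_index, offset = locate(0x1234ABCD)
print(f"tag={tag:#x} set={set_index} offset={offset}")
print(f"up to {WAYS} different lines can coexist in set {set_index}")
```
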
11. Which write policy updates both the cache and main memory on every write?
    A. Write-through
    B. Write-back
    C. Write-allocate
    D. Write-around
    Answer: A. The write-through policy sends every write operation to both cache and main memory, ensuring consistency. Write-back defers memory updates until eviction.

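The difference is easy to demonstrate by counting memory traffic. A minimal sketch, assuming a toy single-level cache with no capacity limit (so nothing is ever evicted):

```python
# Counting memory writes under write-through vs. write-back.
class Cache:
    def __init__(self, write_through):
        self.write_through = write_through
        self.data = {}        # cached address -> value
        self.dirty = set()    # modified addresses not yet written back
        self.memory_writes = 0

    def write(self, addr, value):
        self.data[addr] = value
        if self.write_through:
            self.memory_writes += 1  # every write also goes to memory
        else:
            self.dirty.add(addr)     # deferred until eviction or flush

    def flush(self):
        self.memory_writes += len(self.dirty)  # write back dirty lines
        self.dirty.clear()

for policy, name in [(True, "write-through"), (False, "write-back ")]:
    cache = Cache(write_through=policy)
    for i in range(1000):
        cache.write(0x40, i)  # 1000 writes to the same hot address
    cache.flush()
    print(f"{name}: {cache.memory_writes} memory writes")
```

A thousand writes to one hot address cost a thousand memory writes under write-through but only a single deferred write-back.
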
12. Which memory component has the highest access latency?
    A. Main memory (DRAM)
    B. L1 cache
    C. L2 cache
    D. Register file
    Answer: A. Main memory (DRAM) has higher latency compared to on-chip caches (L1, L2) and registers. Registers are the fastest storage inside the CPU.

13. What is the purpose of Direct Memory Access (DMA) in I/O operations?
    A. Enable devices to transfer data to memory without CPU intervention
    B. Cache I/O data for faster CPU access
    C. Convert parallel data to serial for buses
    D. Manage virtual memory paging
    Answer: A. DMA allows peripheral devices to read and write memory directly without burdening the CPU, improving throughput. It does not handle caching or virtual memory.

14. Which bus arbitration method assigns control based on fixed time slots?
    A. Time-division multiplexing
    B. Daisy chain arbitration
    C. Centralized parallel arbitration
    D. Token passing
    Answer: A. Time-division multiplexing allocates bus access in fixed time slots to different masters. Daisy chain and token passing are sequential arbitration methods.

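A minimal sketch of the fixed-slot idea; the three bus master names are invented:

```python
# Time-division multiplexed arbitration: the bus owner is fixed by the
# cycle number alone; no requests or priorities are consulted.
masters = ["CPU", "DMA engine", "GPU"]

def bus_owner(cycle):
    return masters[cycle % len(masters)]

for cycle in range(6):
    print(f"cycle {cycle}: bus granted to {bus_owner(cycle)}")
```
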
15. Which addressing mode specifies the operand's address directly within the instruction?
    A. Direct addressing
    B. Register indirect addressing
    C. Immediate addressing
    D. Indexed addressing
    Answer: A. Direct addressing includes the memory address of the operand within the instruction. Immediate addressing embeds the actual data, not an address.

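A toy contrast between the two modes named in the explanation; the memory contents and operand values are invented:

```python
# Direct vs. immediate addressing with a toy memory. The operand field
# of the instruction is the only thing that changes meaning.
memory = {0x20: 99}

def load_direct(operand):
    return memory[operand]  # operand is an address: one memory access

def load_immediate(operand):
    return operand          # operand IS the data: no memory access

print(load_direct(0x20))     # 99 - the instruction carried an address
print(load_immediate(0x20))  # 32 - the instruction carried the value itself
```
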
16. According to Amdahl's Law, what is the theoretical maximum speedup if 30% of a program is inherently serial?
    A. 3.33
    B. 1.43
    C. 10.0
    D. 0.30
    Answer: A. The Amdahl's Law limit is 1 / (serial fraction), so 1 / 0.3 ≈ 3.33. This represents the upper bound on speedup with infinite processors.

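The arithmetic is worth checking with real numbers. A minimal sketch using the quiz's 30% serial fraction:

```python
# Amdahl's Law: a 30% serial fraction caps speedup at 1 / 0.30,
# no matter how many processors are added.
def speedup(serial_fraction, n_processors):
    parallel = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel / n_processors)

for n in (2, 4, 16, 1024):
    print(f"{n:>5} processors: speedup = {speedup(0.30, n):.2f}")
print(f"limit as n -> infinity: {1 / 0.30:.2f}")
```
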
17. What is false sharing in multicore cache coherence?
    A. Cores invalidate shared cache lines due to accesses of different words in the same line
    B. When two cores use the same lock incorrectly
    C. Memory pages swapped between cores unpredictably
    D. Cache line loaded in Exclusive state instead of Shared
    Answer: A. False sharing occurs when independent threads modify different parts of the same cache line, causing unnecessary invalidations. It wastes coherence bandwidth without any actual data conflict.

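A small sketch of the underlying geometry, assuming the common 64-byte line size and invented field addresses:

```python
# Two per-thread counters 8 bytes apart land in one 64-byte cache line,
# so writes from different cores fight over it.
LINE_BYTES = 64

def same_cache_line(addr_a, addr_b):
    return addr_a // LINE_BYTES == addr_b // LINE_BYTES

counter_a, counter_b = 0x1000, 0x1008             # adjacent fields
padded_a, padded_b = 0x1000, 0x1000 + LINE_BYTES  # each on its own line

print("adjacent fields share a line:", same_cache_line(counter_a, counter_b))
print("padded fields share a line:  ", same_cache_line(padded_a, padded_b))
```

Padding or aligning each thread's hot data to its own cache line is the usual remedy.
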
18. In a superscalar processor, which feature enables issuing multiple instructions per clock cycle?
    A. Multiple functional units
    B. Larger instruction cache
    C. Deeper pipeline
    D. Branch target buffer
    Answer: A. Superscalar designs include multiple parallel functional units so that more than one instruction can execute simultaneously. A larger cache or a deeper pipeline alone does not enable issuing multiple instructions per cycle.

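A minimal sketch of the multiple-unit idea, deliberately ignoring data dependencies; the unit mix and the program are invented:

```python
# Greedy in-order multiple issue: a cycle issues instructions while a
# matching functional unit is free.
UNITS = {"alu": 2, "mem": 1}                        # 2 ALUs, 1 load/store unit
program = ["alu", "alu", "mem", "alu", "mem", "alu"]

cycle, i = 0, 0
while i < len(program):
    free = dict(UNITS)   # all units available at the start of each cycle
    issued = []
    while i < len(program) and free.get(program[i], 0) > 0:
        free[program[i]] -= 1
        issued.append(program[i])
        i += 1
    cycle += 1
    print(f"cycle {cycle}: issued {issued}")
print(f"{len(program)} instructions in {cycle} cycles")
```
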
19. Which dynamic scheduling algorithm uses reservation stations and a common data bus for operand availability?
    A. Tomasulo's algorithm
    B. Scoreboarding
    C. In-order issue
    D. Static scheduling
    Answer: A. Tomasulo's algorithm uses reservation stations and the common data bus to broadcast operand readiness for dynamic instruction scheduling. Scoreboarding also schedules dynamically but without a common data bus.

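Tomasulo's full algorithm is involved, but its core wake-up mechanism fits in a few lines. The sketch below models only reservation stations snooping a common-data-bus broadcast; issue and dispatch logic are omitted, and all station names and values are invented:

```python
# Reservation stations wait on a producer's tag and capture its result
# when it is broadcast on the common data bus (CDB).
class Station:
    def __init__(self, name, waiting_on=None, value=None):
        self.name = name
        self.waiting_on = waiting_on  # tag of producing station, or None
        self.value = value            # operand value once available

    def snoop_cdb(self, tag, result):
        """Capture a broadcast result if it is the one we are waiting on."""
        if self.waiting_on == tag:
            self.waiting_on, self.value = None, result

stations = [
    Station("Add1", value=7),            # operand already available
    Station("Add2", waiting_on="Mul1"),
    Station("Add3", waiting_on="Mul1"),
]

# Mul1 completes and broadcasts (tag, result); every waiting station
# snoops the bus at once -- no trip through the register file needed.
for station in stations:
    station.snoop_cdb("Mul1", 42)

for station in stations:
    print(f"{station.name}: ready={station.waiting_on is None}, value={station.value}")
```
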
20. In the MESI cache coherence protocol, which state indicates a modified line present only in the current cache?
    A. Modified
    B. Exclusive
    C. Shared
    D. Invalid
    Answer: A. The Modified state means the cache line has been changed and is not synchronized with main memory or other caches. Exclusive means clean and private, Shared means potentially in multiple caches.

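A minimal transition-table sketch covering the canonical MESI events; bus transactions and write-backs are noted in comments rather than simulated, and the read-miss case is simplified to Shared:

```python
# Per-line MESI transitions as seen by one cache, keyed by
# (current state, event). A real protocol goes to E on a read miss
# when no other cache holds the line.
TRANSITIONS = {
    ("I", "local_read"):   "S",
    ("I", "local_write"):  "M",
    ("S", "local_write"):  "M",  # invalidates any other sharers
    ("E", "local_write"):  "M",  # silent upgrade, no bus traffic
    ("M", "remote_read"):  "S",  # supply/write back data first
    ("E", "remote_read"):  "S",
    ("M", "remote_write"): "I",  # write back, then invalidate
    ("E", "remote_write"): "I",
    ("S", "remote_write"): "I",
}

state = "I"
for event in ["local_read", "local_write", "remote_read", "remote_write"]:
    state = TRANSITIONS.get((state, event), state)  # unlisted events: no change
    print(f"after {event:<12} -> {state}")
```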

Learning Outcomes

  1. Analyze CPU components and performance trade-offs.
  2. Identify memory hierarchy roles and cache mechanisms.
  3. Evaluate instruction set architecture principles.
  4. Demonstrate understanding of pipelining and hazards.
  5. Apply bus structures and I/O system concepts.
  6. Master parallel processing and concurrency fundamentals.

Cheat Sheet

  1. Understand CPU Components and Performance Trade-offs - Dive into the heart of your computer by exploring the ALU, control unit, and registers that power every calculation. You'll learn how design choices like clock speed, core count, and power consumption shape overall efficiency. Balancing these factors is key to crafting a CPU that can handle heavy tasks without overheating. CPU Microarchitecture
  2. Explore Memory Hierarchy and Cache Mechanisms - Journey through the layers of memory from blazing-fast registers to L1, L2, and L3 caches, all the way down to main RAM. Discover how smart caching policies like write-through and write-back cut access times and keep your programs running smoothly. Get ready to optimize data flow and see why memory hierarchy is the unsung hero of performance. Cache Hierarchy
  3. Evaluate Instruction Set Architecture (ISA) Principles - Unlock the blueprint of CPU commands by studying how ISAs define every operation your processor can execute. Compare the philosophies behind CISC's rich instruction repertoire and RISC's lean, mean instruction set to see how they influence speed and complexity. This foundational knowledge will sharpen your programming and system-design skills. Instruction Set Architecture
  4. Master Pipelining and Hazard Mitigation - Supercharge CPU throughput with pipelining, where multiple instruction stages overlap like a finely choreographed dance. Learn to spot data, control, and structural hazards that can trip up the pipeline and explore techniques like forwarding and branch prediction to keep things flowing. By mastering these tricks, you'll prevent stalls and boost overall performance. CPU Pipelining
  5. Apply Bus Structures and I/O System Concepts - Explore the highways of data transfer that connect the CPU, memory, and peripherals, and understand how bus widths and protocols influence speed. Dive into I/O systems to see how your computer communicates with keyboards, disks, and networks. This knowledge helps you design balanced systems that avoid bottlenecks. Bus Architecture
  6. Grasp Parallel Processing and Concurrency Fundamentals - Harness the power of multi-core and multi-threading to tackle big problems faster than ever. Uncover the challenges of concurrency, including race conditions and deadlocks, and learn synchronization techniques like locks and semaphores. These tools will let you write code that's both speedy and safe. Parallel Computing
  7. Analyze Microarchitecture Design and Its Impact - Delve deeper into how an ISA is brought to life through microarchitecture choices, impacting throughput, power use, and die size. Examine advanced features like superscalar execution, out-of-order completion, and branch prediction for a competitive edge. You'll see how each design tweak can unlock new levels of performance. Microarchitecture Details
  8. Understand the Role of Control Units in CPUs - Meet the brain within the brain: the control unit that orchestrates instruction decoding, sequencing, and execution. Learn how it generates control signals, manages pipelines, and ensures data moves to the right place at the right time. A solid grasp of this component is essential for understanding CPU choreography. CPU Control Unit
  9. Explore the Importance of Clock Cycles and Timing - Unravel how clock cycles act as the metronome of your CPU, dictating when each instruction stage ticks forward. Study factors like clock rate, cycle efficiency, and pipeline depth to understand their impact on overall throughput. Timing mastery helps you predict performance and avoid timing-related bugs. Clock Rate
  10. Study the Evolution of Processor Architectures - Trace the journey from early single-core designs to today's multi-core, heterogeneous processors that juggle CPUs, GPUs, and specialized accelerators. See how shifts in transistor density, power budgets, and workload demands have driven innovation. This historical perspective will spark ideas for future breakthroughs. Processor Architecture History