Memory Hierarchy: Multiple-Choice Questions and Answers

1. What is the primary purpose of a memory hierarchy in computer systems?
A) To increase power consumption
B) To organize storage by cost and speed
C) To standardize instruction sets
D) To encrypt memory contents
Answer: B

2. Which memory type is typically the fastest?
A) Hard disk drive (HDD)
B) Magnetic tape
C) SRAM (on-chip cache)
D) DRAM (main memory)
Answer: C

3. Which of the following is non-volatile auxiliary memory?
A) SRAM
B) DRAM
C) Flash memory
D) CPU registers
Answer: C

4. Content-addressable memory is another name for:
A) Virtual memory
B) Associative memory
C) Secondary memory
D) Cache memory
Answer: B

5. A cache miss that occurs on the very first reference to a block is called a:
A) Hit
B) Compulsory miss
C) Capacity miss
D) Coherence miss
Answer: B

6. Which mapping policy allows any block to go into any cache line?
A) Direct mapping
B) Fully associative mapping
C) 1-way set-associative mapping
D) No mapping
Answer: B

7. In a direct-mapped cache, the mapping from memory block to cache line is determined by:
A) Block number modulo number of cache lines
B) Random choice each access
C) Tag only
D) CPU clock speed
Answer: A
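The modulo mapping in Q7 can be sketched in a few lines of Python (the 128-line cache size here is a hypothetical example):

```python
def cache_line_index(block_number: int, num_lines: int) -> int:
    """Direct mapping: block b always lands in line (b mod num_lines)."""
    return block_number % num_lines

# With 128 lines, blocks 5, 133, and 261 all compete for line 5.
print(cache_line_index(5, 128))    # 5
print(cache_line_index(133, 128))  # 5
print(cache_line_index(261, 128))  # 5
```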

8. The purpose of a tag field in a cache line is to:
A) Store user data
B) Identify which memory block is stored in that cache line
C) Increase cache size
D) Control power gating
Answer: B

9. Which cache write policy immediately writes every write to both cache and main memory?
A) Write-back
B) Write-around
C) Write-through
D) Write-once
Answer: C

10. Which write policy requires a dirty bit to track modified cache lines?
A) Write-through with no-write-allocate
B) Write-back (copy-back)
C) Read-through
D) Write-once
Answer: B

11. Virtual memory allows:
A) Execution without a CPU
B) Programs to use more memory than physical RAM
C) Memory to be accessed without addresses
D) Files to be compressed automatically
Answer: B

12. A page table entry typically contains which of the following?
A) Page size only
B) Cache line number
C) Frame number and status bits (valid/dirty/reference)
D) Entire process code
Answer: C

13. The Translation Lookaside Buffer (TLB) is used to:
A) Store file system metadata
B) Reduce overhead of virtual-to-physical address translation
C) Cache disk blocks
D) Manage I/O devices
Answer: B

14. The effective access time (EAT) of a system with a TLB is reduced primarily by:
A) Increasing disk size
B) Reducing TLB hit ratio
C) Increasing TLB hit ratio
D) Removing cache memory
Answer: C
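The effect of the hit ratio on EAT can be sketched with the standard formula, assuming a single-level page table so a TLB miss costs one extra memory access (the latency figures are hypothetical):

```python
def effective_access_time(tlb_hit_ratio, tlb_ns=1, mem_ns=100):
    """EAT with a single-level page table: a TLB miss adds one
    extra memory access to fetch the page table entry."""
    hit_time = tlb_ns + mem_ns
    miss_time = tlb_ns + 2 * mem_ns
    return tlb_hit_ratio * hit_time + (1 - tlb_hit_ratio) * miss_time

print(effective_access_time(0.98))  # 103.0 ns
print(effective_access_time(0.50))  # 151.0 ns
```

Raising the hit ratio from 50% to 98% cuts nearly a third off the average access time in this example.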

15. Which page replacement algorithm is provably optimal but requires future knowledge?
A) FIFO
B) LRU (Least Recently Used)
C) Optimal (Belady’s) algorithm
D) Random replacement
Answer: C

16. The CLOCK page replacement algorithm approximates which algorithm?
A) Optimal
B) FIFO
C) LRU
D) Random
Answer: C

17. In a k-way set-associative cache, each set contains:
A) k cache lines
B) k bytes only
C) 1 cache line always
D) infinite lines
Answer: A

18. A cache block is also often called a:
A) Tag
B) Frame
C) Line or cache line
D) Page directory
Answer: C

19. When a write misses the cache and the block is fetched into the cache before being modified, the policy is called:
A) No-write-allocate (write-around)
B) Write-back
C) Write-allocate (write-fetch)
D) Write-through
Answer: C

20. Which of these is true about DRAM compared to SRAM?
A) DRAM is faster and more expensive than SRAM
B) DRAM is slower and denser (cheaper per bit) than SRAM
C) Both have identical speed and cost
D) SRAM needs refresh cycles while DRAM does not
Answer: B

21. The term “memory hierarchy” primarily exploits which two characteristics of memory technologies?
A) Voltage and current
B) Capacity and speed (latency) trade-off
C) Color and shape
D) File system types
Answer: B

22. Auxiliary memory is typically used for:
A) Register allocation
B) Long-term storage and backups
C) CPU instruction decoding
D) Cache tag storage
Answer: B

23. A fully associative cache of N lines requires tag comparison complexity of:
A) Constant (O(1)) with single comparator for all lines
B) O(N) comparators in hardware (parallel compare) or sequential checks
C) O(log N) always
D) Zero comparisons
Answer: B

24. Which of the following is NOT an advantage of cache memory?
A) Reduces average memory access time
B) Lowers effective memory latency for CPU
C) Eliminates need for virtual memory
D) Exploits locality of reference
Answer: C

25. Temporal locality refers to:
A) Accessing data with spatial neighbors
B) Reusing the same memory location within a short time interval
C) Processing local variables only
D) Files accessed on the same disk sector
Answer: B

26. Spatial locality refers to:
A) Accessing memory locations close to each other in address space
B) Reusing the same variable repeatedly
C) Time-of-day dependent memory access
D) Using physical maps for addresses
Answer: A
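Both forms of locality from Q25 and Q26 show up in an ordinary loop; a minimal Python illustration:

```python
data = list(range(1024))

total = 0            # `total` is touched on every iteration: temporal locality
for x in data:       # the list is traversed in address order, so successive
    total += x       # accesses hit neighboring memory: spatial locality

print(total)  # 523776
```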

27. Which flag indicates a page has been modified since it was loaded into memory?
A) Valid bit
B) Dirty (modified) bit
C) Access bit
D) Executable bit
Answer: B

28. A TLB miss requires:
A) Immediate process termination
B) A page table lookup to translate virtual to physical address
C) Flushing the cache
D) Rebooting the system
Answer: B

29. The usual relationship between cache size and hit rate is:
A) Larger cache size generally increases hit rate (all else equal)
B) Larger cache size always decreases hit rate
C) Cache size has no impact on hit rate
D) Hit rate is inversely proportional to cache associativity only
Answer: A

30. The main advantage of multilevel caches (L1, L2, L3) is to:
A) Increase instruction width
B) Balance fast small caches near CPU and larger slower caches further away
C) Remove the need for main memory
D) Reduce CPU clock speed
Answer: B

31. In paging, internal fragmentation occurs because:
A) Pages are different sizes
B) Last page of process may be only partially used
C) Segments overlap
D) Disk blocks are smaller than pages
Answer: B

32. In segmentation (not paging), memory is divided into:
A) Fixed-size pages
B) Variable-size logical segments based on program structure
C) Cache blocks only
D) Disk cylinders
Answer: B

33. The base and limit register pair is mainly used for:
A) Virtual memory address translation through TLB
B) Simple memory protection and relocation in contiguous allocation systems
C) Cache replacement algorithm
D) Disk scheduling
Answer: B

34. Which memory is typically implemented using magnetic platters?
A) SRAM
B) DRAM
C) Hard disk drive (HDD) auxiliary memory
D) Flash memory
Answer: C

35. The miss penalty is defined as:
A) Time on hit
B) Additional time required to service a miss (including fetching block)
C) Number of cache lines
D) Page size in bytes
Answer: B

36. Which of the following is a capacity miss?
A) Miss occurring on first access to a block
B) Miss because cache is too small to hold all active blocks
C) Miss because two blocks map to same line (conflict)
D) Miss due to coherence in multiprocessor system
Answer: B

37. Which is a conflict miss?
A) Miss occurring due to limited associativity where multiple blocks map to same set
B) Miss caused by power failure
C) Miss when reading disk sequentially
D) Miss that never happens
Answer: A

38. A write-back cache updates main memory:
A) Immediately upon every write
B) Never — only on special commands
C) Only when a modified (dirty) block is evicted from cache
D) Only on reads
Answer: C
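The eviction-time write in Q38 can be sketched with a single cache line and a dirty bit (an illustrative Python model, not a full cache simulator; tags and data values are made up):

```python
class WriteBackLine:
    """One cache line with a dirty bit, sketching write-back behavior."""
    def __init__(self):
        self.tag, self.data, self.dirty = None, None, False

    def write(self, tag, data):
        writeback = None
        if self.dirty and self.tag not in (None, tag):
            writeback = (self.tag, self.data)  # memory updated only on eviction
        self.tag, self.data, self.dirty = tag, data, True
        return writeback                       # what main memory must absorb

line = WriteBackLine()
print(line.write(0x10, "A"))  # None — first write stays in cache
print(line.write(0x10, "B"))  # None — repeated write, no memory traffic
print(line.write(0x20, "C"))  # (16, 'B') — dirty block flushed on eviction
```

Note that the repeated write to the same block generates no memory traffic at all, which is exactly the advantage claimed in Q64.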

39. Virtual memory page size is typically chosen to be:
A) Extremely small (1 byte) for efficiency
B) Large enough to amortize page fault overhead but small enough to limit internal fragmentation (e.g., 4 KB is typical)
C) Exactly equal to disk sector always
D) Unlimited
Answer: B

40. Which hardware checks are commonly used for memory protection?
A) Base and limit registers; page table valid bits; privilege levels
B) Only file permissions
C) CPU clock frequency checks
D) Temperature sensors
Answer: A

41. In a paging system, page fault occurs when:
A) The page is in main memory already
B) The referenced page is not present in main memory (invalid/in secondary store)
C) The TLB has high hit ratio
D) Cache line is dirty
Answer: B

42. Thrashing occurs when:
A) System has excessive cache hits
B) System spends more time swapping pages in/out than executing processes due to insufficient physical memory
C) Disk is full but memory is free
D) CPU temperature is high
Answer: B

43. Which of the following is NOT a page replacement algorithm?
A) LRU (Least Recently Used)
B) FIFO (First-In First-Out)
C) Best-Fit memory allocation
D) OPT (Optimal)
Answer: C

44. The dirty bit is primarily needed to:
A) Track whether a page has been referenced
B) Track whether a page needs to be written back to disk upon eviction
C) Mark read-only pages
D) Indicate page size
Answer: B

45. A multi-level page table is used to:
A) Increase page size
B) Reduce memory consumed by page tables for sparse address spaces
C) Replace the TLB entirely
D) Manage cache associativity
Answer: B

46. Which cache coherence problem arises in multiprocessor systems?
A) Different caches holding different values of same memory location leading to inconsistencies
B) Increasing TLB hit rate
C) Faster main memory reads
D) Increased clock speed
Answer: A

47. The MESI protocol is used for:
A) Memory encryption
B) Cache coherence in multiprocessors
C) Page replacement policies
D) Disk scheduling
Answer: B

48. Where is the processor’s highest-priority, lowest-latency memory located?
A) L3 cache
B) Main memory (DRAM)
C) Registers inside CPU
D) Hard disk
Answer: C

49. Which of the following best describes associative memory search?
A) Search by address only
B) Search by content, returning the address or data if match found
C) Sequential disk search
D) Directory listing search
Answer: B

50. Translation Lookaside Buffer entries typically store:
A) Virtual page number to physical frame mappings and status bits
B) Entire process code
C) Disk sector numbers only
D) Cache line contents
Answer: A

51. Which of the following increases the size of the virtual address space without increasing physical memory?
A) Adding more CPU cores
B) Using a larger page size only
C) Virtual memory (paging/segmentation)
D) Reducing cache size
Answer: C

52. The “memory wall” problem refers to:
A) Processors being too slow compared to memory
B) Growing gap between CPU speed and memory access latency
C) Excessive memory fragmentation
D) Physical wall near the server room
Answer: B

53. In a write-back cache, when a block is modified and later read by another processor, the system needs to ensure:
A) The block is written back to disk immediately
B) Cache coherence ensures other caches see the most recent value via protocol (e.g., invalidate/flush)
C) Nothing — stale values are acceptable
D) The block is deleted
Answer: B

54. Which statement about cache line size (block size) is true?
A) Larger block size always improves performance
B) Too large a block size may increase the miss penalty and waste cache space on unused data (cache pollution)
C) Block size has no effect on cache behavior
D) Block size must equal page size
Answer: B

55. The main difference between cache memory and main memory is:
A) Cache is slower and larger
B) Cache is faster and smaller, located closer to CPU
C) Main memory is on-chip and cache is off-chip
D) They are identical in design
Answer: B

56. A hardware-managed cache uses:
A) Software interrupts to manage tags
B) Dedicated cache controller logic in hardware to perform tag check and data fetch
C) Manual programmer intervention for each access
D) Disk drivers
Answer: B

57. Which of the following is a property of associative (content-addressable) memory?
A) Low hardware cost for large arrays compared to conventional RAM
B) Allows searching all stored words simultaneously for a match
C) Only supports sequential search
D) Cannot be used for translation tasks
Answer: B

58. In virtual memory systems, swapping refers to:
A) Exchanging two registers
B) Moving entire processes or pages between main memory and secondary storage
C) Changing CPU instruction set
D) Rotating cache sets
Answer: B

59. The page table base register (PTBR) contains:
A) The size of the page
B) The physical address where the page table starts in memory
C) Cache coherence protocol name
D) Disk partition index
Answer: B

60. Which of the following mitigation techniques reduces TLB misses during context switches?
A) Flushing the entire TLB on every context switch (no mitigation)
B) Using process-specific address space identifiers (ASIDs) or tagged TLB entries
C) Increasing disk speed
D) Using smaller pages only
Answer: B

61. A victim cache is used to:
A) Replace main memory usage entirely
B) Hold recently evicted lines from a small cache to reduce conflict misses
C) Store process metadata permanently
D) Increase disk fragmentation
Answer: B

62. The term “associativity” in cache design refers to:
A) Number of CPUs in the system
B) Number of lines per set in set-associative cache
C) Power consumption of each line
D) Disk RPMs
Answer: B

63. If a cache has block size 64 bytes and 1024 sets, how many unique addresses map to the same set (ignoring tag)?
A) 64
B) 1024
C) All addresses whose block numbers are congruent modulo 1024 — many (depends on memory size)
D) 1 only
Answer: C
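For Q63's geometry (64 B blocks, 1024 sets), addresses one set-stride apart collide in the same set; a quick Python check (the sample addresses are arbitrary):

```python
BLOCK_BYTES = 64
NUM_SETS = 1024

def set_index(addr: int) -> int:
    return (addr // BLOCK_BYTES) % NUM_SETS

stride = NUM_SETS * BLOCK_BYTES          # 65536 bytes between colliding blocks
addrs = [0x100, 0x100 + stride, 0x100 + 2 * stride]
print([set_index(a) for a in addrs])     # [4, 4, 4]
```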

64. The benefit of write-back caches over write-through caches is:
A) Simplicity in keeping memory consistent
B) Fewer memory writes on repeated updates, improving performance
C) No need for dirty bits
D) They require no cache coherence protocol
Answer: B

65. The “locality principle” that justifies caches is based on:
A) Programs occupying only local disk partitions
B) Programs tend to access a relatively small portion of their address space at any period (temporal and spatial locality)
C) Memory locations being physically close in the building
D) All programs using identical data
Answer: B

66. Which hardware structure is responsible for mapping virtual addresses to physical addresses?
A) ALU
B) Memory Management Unit (MMU)
C) Hard disk controller
D) Cache tag array
Answer: B

67. In a two-level cache hierarchy, L1 is typically:
A) Larger and slower than L2
B) Smaller and faster than L2
C) Located on disk
D) Absent in modern CPUs
Answer: B

68. Which page replacement algorithm gives second chance to recently used pages by checking a reference bit?
A) Optimal algorithm
B) FIFO only
C) Second-chance (CLOCK) algorithm
D) Random replacement
Answer: C
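A minimal sketch of the second-chance sweep (the frame contents and reference bits here are made up):

```python
def clock_evict(ref_bits, hand):
    """Advance the clock hand, clearing reference bits, until a frame
    with its reference bit clear is found; that frame is the victim."""
    while ref_bits[hand]:
        ref_bits[hand] = False            # second chance: clear and move on
        hand = (hand + 1) % len(ref_bits)
    return hand

frames = ["A", "B", "C"]
ref_bits = [True, False, True]
victim = clock_evict(ref_bits, hand=0)
print(frames[victim])  # B — first frame found with reference bit 0
```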

69. In write-allocate caches, a store that misses the cache will:
A) Be discarded
B) Cause the block to be loaded into cache then modified
C) Only update main memory and not cache
D) Halt the processor
Answer: B

70. The size of a page table for a single-level paging system depends on:
A) Number of processes only
B) Size of virtual address space and page size (number of pages)
C) Disk speed
D) Cache associativity
Answer: B

71. Which of the following reduces the number of page faults for a given process?
A) Decreasing physical memory allocated to the process
B) Increasing the working set (i.e., allocating more frames)
C) Using smaller pages always
D) Flushing CPU registers frequently
Answer: B

72. A direct-mapped cache of 8KB with block size 64B has how many lines?
A) 8
B) 128
C) 64
D) 1024
Answer: B
(8 KB / 64 B = 8192 / 64 = 128)
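The same arithmetic extends to the full address breakdown (assuming 32-bit addresses, an assumption not stated in the question):

```python
import math

cache_bytes = 8 * 1024
block_bytes = 64

lines = cache_bytes // block_bytes           # 128 lines, as computed above
offset_bits = int(math.log2(block_bytes))    # 6 bits select a byte in a block
index_bits = int(math.log2(lines))           # 7 bits select the cache line
tag_bits = 32 - index_bits - offset_bits     # 19 bits identify the block

print(lines, offset_bits, index_bits, tag_bits)  # 128 6 7 19
```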

73. Which technique is commonly used to reduce disk I/O when servicing paging?
A) Increasing disk rotational latency
B) Using clustering and prefetching (read-ahead) of adjacent pages
C) Deleting swap spaces
D) Using single-level cache only
Answer: B

74. The term “page frame” refers to:
A) A fixed-size block of secondary storage only
B) A fixed-size block of physical memory that holds a page
C) Cache line metadata
D) CPU instruction format
Answer: B

75. In an operating system using paging, the page size should ideally be:
A) As small as possible to reduce fragmentation only
B) Balanced to reduce page fault overhead but limit internal fragmentation
C) Equal to disk block size always
D) Irrelevant to performance
Answer: B

76. Which of the following is true about multilevel caches and inclusivity?
A) Inclusive cache hierarchy means L1 contents are a subset of L2 contents
B) Exclusive cache hierarchy means L1 and L2 always contain identical lines
C) Inclusive means no duplicate lines are allowed in caches
D) Inclusivity only applies to virtual memory
Answer: A

77. A cache line replacement policy chooses which existing cache line to evict when:
A) There is free space in cache
B) The CPU is idle
C) A new block must be placed in a full set; typical policies include LRU, FIFO, Random
D) The operating system requests it not to
Answer: C

78. Which of the following describes demand paging?
A) All pages of process are preloaded before execution
B) Pages are loaded into memory only when first referenced
C) Pages are loaded sequentially from disk irrespective of need
D) Pages are never loaded
Answer: B

79. The term “page table entry valid bit = 0” usually indicates:
A) The page is present in physical memory and ready to use
B) The page is not currently in physical memory or access is illegal
C) The page contains only zeros
D) The page size is zero
Answer: B

80. Which cache miss classification is caused by data being replaced and later re-referenced due to limited cache capacity?
A) Compulsory miss
B) Capacity miss
C) Conflict miss
D) Coherence miss
Answer: B

81. Which of the following accelerates virtual-to-physical address translation the most?
A) Increasing disk bandwidth
B) Using a larger page table only
C) Using a TLB (Translation Lookaside Buffer) with high hit rate
D) Decreasing CPU frequency
Answer: C

82. For a 32-bit virtual address with 4 KB pages, the page offset field width is:
A) 12 bits
B) 32 bits
C) 4 bits
D) 1024 bits
Answer: A
(4 KB = 2^12, so offset = 12 bits)
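Splitting a 32-bit virtual address with 4 KB pages can be sketched directly in Python (the address value is arbitrary):

```python
OFFSET_BITS = 12                      # 4 KB = 2^12
PAGE_SIZE = 1 << OFFSET_BITS

vaddr = 0x12345ABC
offset = vaddr & (PAGE_SIZE - 1)      # low 12 bits: byte within the page
vpn = vaddr >> OFFSET_BITS            # remaining 20 bits: virtual page number

print(hex(vpn), hex(offset))  # 0x12345 0xabc
```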

83. Which replacement algorithm has the stack property and can be implemented exactly with counters or a stack, but is expensive in hardware?
A) FIFO
B) Exact LRU (Least Recently Used) implementation
C) Random
D) Optimal with future knowledge
Answer: B
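Exact LRU is straightforward in software, where an ordered map stands in for the hardware counters; a sketch counting page faults on a classic reference string:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    cache, faults = OrderedDict(), 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(cache) == num_frames:
                cache.popitem(last=False)  # evict the least recently used
            cache[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10
```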

84. The primary role of the cache controller is to:
A) Manage disk I/O operations
B) Coordinate cache operations, tag comparison, and data transfer to/from CPU and memory
C) Encrypt memory contents
D) Schedule processes for CPU
Answer: B

85. Which is a disadvantage of very large page sizes?
A) Lower TLB coverage
B) Increased internal fragmentation and slower page transfer on page faults
C) Faster page table lookup always
D) Reduced DMA overhead always
Answer: B

86. The benefit of using small pages includes:
A) Reduced internal fragmentation in physical memory allocations (but possibly higher page table overhead)
B) Increased page fault overhead always
C) Always decreased TLB misses
D) Eliminates need for a TLB
Answer: A

87. In a write-through no-write-allocate policy, a store miss will:
A) Allocate the block in cache and update it
B) Update main memory directly without loading the block into cache
C) Cause the CPU to halt
D) Flush entire cache
Answer: B

88. The associativity of a direct-mapped cache is:
A) 0-way
B) 1-way
C) equal to number of sets
D) N-way where N equals number of lines
Answer: B

89. Which component is essential to support virtual memory at the hardware level?
A) Disk controller firmware only
B) MMU with page table support and TLB optional for performance
C) GPU with dedicated VRAM
D) Cache write buffer only
Answer: B

90. The “dirty bit” being set in a page/frame means:
A) Page has not been accessed recently
B) Page is modified and must be written to swap on eviction
C) Page is read-only
D) Page is executable only
Answer: B

91. Which of the following is true about access time ordering in typical systems (fastest to slowest)?
A) Disk > Main memory > Cache > Registers
B) Registers > L1 cache > L2 cache > Main memory > Disk
C) Main memory > L1 cache > Registers > Disk
D) Disk > Registers > Cache > Main memory
Answer: B

92. Which memory is primarily used to store frequently used instructions and data for immediate CPU access?
A) Swap file
B) Cache memory (L1/L2)
C) Magnetic tape
D) External cloud storage
Answer: B

93. The term “page table walk” refers to:
A) Traversing the multi-level page table entries in memory to translate a virtual address when TLB misses occur
B) Walking through cache lines sequentially
C) Scanning the disk directory
D) Compiling the page table into code
Answer: A
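A two-level walk for the classic 32-bit / 4 KB layout (10 + 10 + 12 bits) can be sketched with dictionaries standing in for the in-memory tables; the mapping below is hypothetical:

```python
def page_table_walk(page_directory, vaddr):
    outer = (vaddr >> 22) & 0x3FF          # index into the page directory
    inner = (vaddr >> 12) & 0x3FF          # index into the second-level table
    offset = vaddr & 0xFFF                 # byte within the page
    second_level = page_directory[outer]   # first memory access
    frame = second_level[inner]            # second memory access
    return (frame << 12) | offset

# Hypothetical mapping: virtual page (outer=1, inner=2) -> physical frame 5
directory = {1: {2: 0x5}}
vaddr = (1 << 22) | (2 << 12) | 0x34
print(hex(page_table_walk(directory, vaddr)))  # 0x5034
```

Each level of the walk costs a memory access, which is why a TLB hit (Q81) is so valuable.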

94. Which of the following can be used to reduce the size of page tables?
A) Using single-level page tables only
B) Multi-level page tables or hashed/inverted page tables for large address spaces
C) Increasing page size to extreme values only
D) Removing virtual memory entirely
Answer: B

95. In a system with write-back cache, when is the main memory updated with written data?
A) On every CPU write instruction
B) When the cache line is evicted and the dirty bit is set
C) Never updated
D) Only during system reboot
Answer: B

96. The purpose of a cache prefetcher is to:
A) Evict data faster
B) Predict and load data into cache before the CPU requests it to reduce miss latency
C) Replace the TLB entirely
D) Compress cache contents
Answer: B

97. Which of the following best describes inverted page table?
A) A page table indexed by virtual page numbers for each process, large for big address spaces
B) A global page table indexed by physical frames that contains mapping to virtual page and process id, saving memory for large virtual spaces
C) A page table that is inverted on disk only
D) A cache replacement policy
Answer: B

98. The purpose of the “reference bit” (accessed bit) in page table entries is to:
A) Indicate whether the page has been read or written recently, useful for replacement algorithms
B) Indicate page size
C) Show cache line size
D) Mark pages as executable only
Answer: A

99. Which of the following decreases average memory access time when the cache hit rate is high?
A) Increasing miss penalty
B) Increasing the clock frequency only
C) Having a larger and faster L1 cache with good hit rate and low latency
D) Reducing physical memory size
Answer: C

100. A process’s working set is defined as:
A) Set of pages referenced by the process during a given time window, representing its current locality of reference
B) All pages in the virtual address space regardless of use
C) Only the process’s stack area
D) The set of CPU registers only
Answer: A
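The window-based definition translates directly to Python (the reference string and window size below are made up):

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the last `delta` references
    ending at time t (inclusive)."""
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 2]
print(working_set(refs, t=6, delta=5))  # {1, 3, 4}
print(working_set(refs, t=7, delta=3))  # {2, 4}
```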