Instruction Pipelining & Pipeline Hazards MCQ Questions and Answers

1. In instruction pipelining, the main goal is to:
A. Increase latency
B. Decrease throughput
C. Increase instruction throughput
D. Reduce instruction size

2. The performance of a pipelined processor depends primarily on:
A. Clock frequency
B. Number of ALUs
C. Number of pipeline stages and hazards
D. Size of cache memory

3. A pipeline is said to be balanced when:
A. Each stage has unequal delay
B. Each stage takes approximately equal time
C. Some stages are skipped
D. No buffers are used

4. Ideal speedup in a pipeline with 5 stages is:
A. 2
B. 4
C. 5
D. 10

5. In a 4-stage pipeline, if one stage takes longer, the pipeline speed is determined by:
A. Average stage time
B. Slowest stage time
C. Fastest stage time
D. None

6. The term “pipeline stall” means:
A. Instruction decoding
B. Temporary halt in pipeline execution
C. Instruction prefetching
D. Branch prediction

7. A 6-stage pipeline ideally completes 30 instructions in:
A. 180 cycles
B. 35 cycles (5 fill cycles + 30 completions)
C. 30 cycles
D. 36 cycles
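
The ideal cycle count in questions like this follows the (k + N – 1) rule: the first instruction needs k cycles to fill the pipeline, after which one instruction completes per cycle. A minimal Python check of that arithmetic (stage and instruction counts are taken from the question; everything else is illustrative):

```python
def ideal_pipeline_cycles(stages: int, instructions: int) -> int:
    """Ideal cycle count: k cycles for the first instruction,
    then one completion per cycle for the remaining N - 1."""
    return stages + instructions - 1

# Question 7: 6-stage pipeline, 30 instructions
print(ideal_pipeline_cycles(6, 30))   # 35
# Non-pipelined for comparison: 6 * 30 = 180 cycles
```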

8. The first instruction in a pipeline completes after:
A. 1 cycle
B. k cycles (where k = number of stages)
C. 2 cycles
D. n cycles

9. Which component allows multiple instructions to overlap in execution?
A. Cache
B. Pipeline registers
C. ALU
D. Control unit

10. Structural hazards occur when:
A. Two instructions require the same hardware resource
B. Incorrect data is fetched
C. Branch prediction fails
D. None

11. A hazard in pipelining refers to:
A. Cache miss
B. A condition that prevents the next instruction from executing in the next cycle
C. Memory fault
D. Interrupt request

12. Hazards are classified into:
A. Structural, Control, Power
B. Structural, Data, Control
C. Instructional, Logical, Data
D. Decode, Execute, Writeback

13. A data hazard occurs when:
A. Instructions overlap
B. An instruction depends on the result of a previous instruction
C. Memory is slow
D. A register is unused

14. Control hazards arise due to:
A. ALU overflow
B. Branch or jump instructions
C. Cache conflict
D. Interrupts

15. Structural hazards can be avoided by:
A. Data forwarding
B. Duplicating hardware resources
C. Stalling pipeline
D. Branch prediction

16. A RAW hazard stands for:
A. Write-After-Write
B. Read-After-Read
C. Read-After-Write
D. Write-After-Read

17. WAR hazard means:
A. Write-After-Read
B. Read-After-Write
C. Write-After-Write
D. None

18. WAW hazard stands for:
A. Write-After-Write
B. Write-After-Read
C. Read-After-Write
D. Write-And-Write
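
As a rough illustration of the three dependence types in questions 16 to 18, the sketch below classifies the hazards between an earlier and a later instruction from their source and destination registers. The instruction encoding (a simple tuple of destination and sources) is an assumption made purely for illustration.

```python
def classify_hazards(earlier, later):
    """Classify data hazards between two instructions.

    Each instruction is a hypothetical (dest, sources) tuple,
    e.g. ("R1", ["R2", "R3"]) for ADD R1, R2, R3.
    """
    e_dst, e_srcs = earlier
    l_dst, l_srcs = later
    hazards = []
    if e_dst in l_srcs:   # later reads what earlier writes
        hazards.append("RAW")
    if l_dst in e_srcs:   # later writes what earlier reads
        hazards.append("WAR")
    if l_dst == e_dst:    # both write the same register
        hazards.append("WAW")
    return hazards

# ADD R1, R2, R3 followed by SUB R4, R1, R5 -> RAW on R1
print(classify_hazards(("R1", ["R2", "R3"]), ("R4", ["R1", "R5"])))
```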

19. RAW hazard can be resolved using:
A. Branch prediction
B. Data forwarding (bypassing)
C. Instruction reordering only
D. Register renaming

20. WAR and WAW hazards are generally found in:
A. Out-of-order execution pipelines
B. In-order pipelines
C. RISC processors only
D. Non-pipelined CPUs

21. A pipeline stall is also known as:
A. Interrupt
B. Bubble
C. Clock delay
D. Register delay

22. Stalling reduces:
A. Latency
B. Throughput
C. Memory size
D. Instruction width

23. Data forwarding works by:
A. Reordering code
B. Using intermediate pipeline register outputs as inputs
C. Using cache memory
D. Adding wait cycles
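
A hedged sketch of the forwarding idea in question 23: instead of waiting for write-back, the control logic compares the destination register held in a later pipeline register (e.g. EX/MEM) with the source register of the instruction currently in EX, and routes the result back when they match. The signal names below follow the common textbook five-stage description and are assumptions, not a specific processor's interface.

```python
def forward_a(ex_mem_regwrite, ex_mem_rd, mem_wb_regwrite, mem_wb_rd, id_ex_rs):
    """Select the source of ALU operand A, in the spirit of the classic
    forwarding unit: prefer the newest matching result."""
    if ex_mem_regwrite and ex_mem_rd != 0 and ex_mem_rd == id_ex_rs:
        return "EX/MEM"   # forward the just-computed ALU result
    if mem_wb_regwrite and mem_wb_rd != 0 and mem_wb_rd == id_ex_rs:
        return "MEM/WB"   # forward the value about to be written back
    return "REGFILE"      # no hazard: read the register file normally

# ADD R1,... now in EX/MEM, SUB ...,R1,... now in EX -> forward from EX/MEM
print(forward_a(True, 1, False, 0, 1))   # EX/MEM
```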

24. Data forwarding eliminates:
A. Some data hazards
B. Structural hazards
C. Control hazards
D. All hazards

25. Stalls are inserted by:
A. ALU
B. Cache
C. Pipeline control logic
D. Compiler only

26. The penalty of a branch misprediction is measured in:
A. Bytes
B. Instructions
C. Clock cycles
D. Cache hits

27. The effect of a stall is equivalent to:
A. Flushing pipeline
B. Inserting NOP instructions
C. Reordering code
D. Reducing clock rate

28. When data forwarding is not possible, the solution is:
A. Insert pipeline stalls
B. Skip instruction
C. Branch predict
D. Change ISA

29. The hazard detection unit monitors:
A. Instruction cache
B. Pipeline register dependencies
C. Clock frequency
D. ALU flags

30. Branch delay slot means:
A. Delay in data hazard
B. Instruction slot executed after a branch regardless of outcome
C. Cache delay
D. Forwarding delay

31. A control hazard is also called:
A. Data hazard
B. Branch hazard
C. Resource hazard
D. Memory hazard

32. Branch penalty is:
A. Number of cycles lost due to branch
B. Total instruction time
C. Cache miss delay
D. Data hazard delay

33. A static branch prediction method uses:
A. Fixed rule at compile time
B. Dynamic table update
C. Cache history
D. None

34. Dynamic branch prediction relies on:
A. Compiler optimization
B. Runtime behavior of branches
C. Instruction prefetching
D. None

35. A 1-bit branch predictor fails for:
A. Single loop
B. Loop with even iterations
C. Nested loops
D. Unconditional branches

36. A 2-bit branch predictor can:
A. Predict all branches
B. Avoid single misprediction in repeated branches
C. Never fail
D. Reduce memory hazards
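
Questions 35 and 36 contrast 1-bit and 2-bit predictors: a 1-bit predictor flips on every misprediction, so a loop branch is mispredicted both when the loop exits and when it is re-entered; a 2-bit saturating counter needs two consecutive mispredictions before its prediction changes. A small sketch of the 2-bit scheme (the state encoding is the usual textbook one, used here as an assumption):

```python
class TwoBitPredictor:
    """2-bit saturating counter: states 0,1 predict not-taken; 2,3 predict taken."""
    def __init__(self, state: int = 3):
        self.state = state              # start strongly taken

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        # Saturate at 0 and 3 so a single anomaly does not flip the prediction.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

# A loop branch taken 4 times then not taken, run twice:
p = TwoBitPredictor()
outcomes = [True, True, True, True, False] * 2
mispredicts = 0
for taken in outcomes:
    mispredicts += (p.predict() != taken)
    p.update(taken)
print(mispredicts)   # 2 mispredictions (one per loop exit);
                     # a 1-bit predictor would also mispredict on re-entry
```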

37. Branch target buffer (BTB) stores:
A. Instruction operands
B. Target addresses of previously taken branches
C. Control signals
D. ALU flags

38. Branch prediction improves:
A. Latency
B. Pipeline efficiency
C. Cache speed
D. Memory bandwidth

39. Flushing the pipeline occurs when:
A. Branch misprediction happens
B. Cache miss
C. Data hazard occurs
D. Forwarding fails

40. Delayed branching is used to:
A. Increase stalls
B. Reduce branch penalty
C. Eliminate RAW hazards
D. Simplify ALU design

41. Pipeline throughput is defined as:
A. Time per instruction
B. Number of instructions completed per unit time
C. CPI × Clock rate
D. None

42. The latency of one instruction in a pipelined processor equals:
A. 1 clock cycle
B. Sum of all stage times
C. Fastest stage time
D. None

43. Ideal CPI (Cycles Per Instruction) in pipelining is:
A. 1
B. 0
C. Equal to number of stages
D. Infinite

44. CPI increases due to:
A. Clock speed
B. Pipeline stalls and hazards
C. Cache hit
D. None

45. Speedup of pipeline =
A. Non-pipelined time / Pipelined time
B. Pipelined / Non-pipelined
C. 1 / (Number of stages)
D. None

46. If each stage takes 10 ns, the clock period of a 5-stage pipeline is:
A. 2 ns
B. 10 ns
C. 50 ns
D. 100 ns

47. For N instructions and k-stage pipeline, total time ≈
A. (N+k)T
B. (k + N – 1) × T
C. (N/k)T
D. N×T

48. Pipeline efficiency (%) =
A. (Actual throughput / Ideal throughput) × 100
B. CPI × 100
C. (Clock/Time)
D. None
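
Questions 45 through 48 rest on a few standard formulas: the clock period is set by the slowest stage, total pipelined time is (k + N – 1) × T, speedup is non-pipelined time over pipelined time, and efficiency is that speedup divided by the stage count (equivalently, achieved throughput over ideal throughput). A worked check using the 10 ns stage time from question 46 (the instruction count of 100 is an arbitrary illustration):

```python
k = 5            # pipeline stages
T = 10e-9        # 10 ns clock, set by the slowest stage
N = 100          # instruction count (illustrative)

non_pipelined = N * k * T            # every instruction passes all k stages serially
pipelined     = (k + N - 1) * T      # fill once, then one completion per cycle

speedup    = non_pipelined / pipelined
efficiency = speedup / k             # = N / (k + N - 1), i.e. throughput / ideal throughput

print(f"speedup    = {speedup:.2f}")      # ~4.81, approaching 5 for large N
print(f"efficiency = {efficiency:.1%}")   # ~96.2%
```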

49. The first instruction experiences no speedup because:
A. It stalls
B. Pipeline is empty initially
C. Control hazard
D. Cache miss

50. Instruction pipelining improves:
A. Instruction accuracy
B. CPU throughput
C. Latency
D. Memory hierarchy

51. Superscalar processors issue:
A. One instruction per cycle
B. Multiple instructions per cycle
C. One instruction per two cycles
D. None

52. A pipeline interlock is:
A. Register renaming
B. An automatic stall to prevent hazards
C. Cache lock
D. Clock freeze

53. Scoreboarding helps in:
A. Dynamic hazard detection
B. Static branch prediction
C. Cache mapping
D. Data forwarding

54. Tomasulo’s algorithm eliminates:
A. Structural hazards
B. Data hazards via register renaming
C. Control hazards
D. Branch misprediction

55. Dynamic scheduling allows:
A. Fixed instruction order
B. Out-of-order execution
C. Only scalar pipelines
D. No dependency tracking

56. Pipeline flush clears:
A. All instructions after a mispredicted branch
B. Cache
C. Register file
D. Clock buffer

57. Instruction-level parallelism (ILP) is improved by:
A. Decreasing dependencies among instructions
B. Adding stalls
C. Slowing clock
D. Reducing cache

58. Structural hazard can be reduced by:
A. Duplicating execution units
B. Using branch prediction
C. Cache bypass
D. Data forwarding

59. Out-of-order execution reduces:
A. Clock frequency
B. Pipeline stalls due to hazards
C. Control hazards
D. Cache size

60. Register renaming removes:
A. False data dependencies
B. True dependencies
C. Structural conflicts
D. Branch hazards
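
Register renaming (questions 54 and 60) removes WAR and WAW hazards by giving each new write its own physical register, so only true RAW dependences remain. A toy rename pass, assuming a simple (dest, sources) instruction tuple purely for illustration:

```python
from itertools import count

def rename(instructions):
    """Map architectural destinations to fresh physical registers (P0, P1, ...)."""
    phys = count()
    mapping = {}                      # architectural -> current physical register
    renamed = []
    for dest, srcs in instructions:
        srcs = [mapping.get(s, s) for s in srcs]   # read the latest mapping (keeps RAW)
        mapping[dest] = f"P{next(phys)}"           # fresh register per write (removes WAR/WAW)
        renamed.append((mapping[dest], srcs))
    return renamed

# R1 = R2+R3; R4 = R1+R5; R1 = R6+R7  (the second write to R1 created WAW/WAR)
prog = [("R1", ["R2", "R3"]), ("R4", ["R1", "R5"]), ("R1", ["R6", "R7"])]
print(rename(prog))
# [('P0', ['R2', 'R3']), ('P1', ['P0', 'R5']), ('P2', ['R6', 'R7'])]
```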

61. When two instructions use the same ALU simultaneously, it causes:
A. Structural hazard
B. Data hazard
C. Control hazard
D. None

62. When a load instruction is followed by an immediate use of loaded data, it causes:
A. Control hazard
B. Data hazard (load-use delay)
C. Structural hazard
D. None
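
The load-use case in question 62 is the one data hazard that forwarding alone cannot hide in a classic five-stage pipeline: the loaded value exists only after MEM, so one bubble must be inserted when the instruction right after a load consumes its result. A sketch of the usual detection condition (signal names follow the textbook five-stage description and are assumptions):

```python
def must_stall(id_ex_memread: bool, id_ex_rt: int, if_id_rs: int, if_id_rt: int) -> bool:
    """Stall (insert one bubble) when the instruction in EX is a load whose
    destination register is a source of the instruction currently being decoded."""
    return id_ex_memread and id_ex_rt in (if_id_rs, if_id_rt)

# LW R2, 0(R1) in EX, followed by ADD R4, R2, R3 in ID -> stall one cycle
print(must_stall(True, 2, 2, 3))   # True
```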

63. The hazard detection unit stalls the pipeline:
A. Always
B. Randomly
C. Only when a dependency is detected
D. Never

64. Compiler scheduling can:
A. Increase stalls
B. Reduce pipeline stalls
C. Cause hazards
D. None

65. A “bubble” in the pipeline represents:
A. Idle pipeline stage
B. Extra instruction
C. Memory write
D. Cache miss

66. Control hazard resolution is easier when:
A. Branch target known early
B. Branch target delayed
C. Branch condition resolved early
D. Branch ignored

67. A delay slot helps by:
A. Using instruction after branch to do useful work
B. Increasing stalls
C. Stopping forwarding
D. None

68. A load-use hazard can be reduced by:
A. Reordering instructions
B. Branch prediction
C. Register renaming
D. Pipeline flush

69. The instruction pipeline clock period is determined by:
A. Average stage
B. Longest stage delay
C. Fastest stage
D. Instruction width

70. Flushing reduces:
A. Pipeline efficiency
B. Clock speed
C. Cache hits
D. Memory delay

71. For a 4-stage pipeline, 10 instructions complete in how many cycles (ideal)?
A. 14
B. 40
C. 13 (k + N – 1)
D. 10

72. With branch penalty of 3 cycles, 10% branches, CPI =
A. 1.3
B. 0.9
C. 1
D. 2

73. Speedup = 5 / (1 + 0.2×4) ≈
A. 5
B. 2
C. 2.78
D. 1

74. Efficiency of 4-stage pipeline executing 20 instructions =
A. (20/23) × 100 ≈ 87%
B. 90%
C. 75%
D. 60%
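
The four numeric items above all fall out of the formulas used earlier; a quick check (stage counts, penalties, and frequencies are the ones given in the questions):

```python
# Q71: 4-stage pipeline, 10 instructions (ideal)
print(4 + 10 - 1)                         # 13 cycles

# Q72: CPI with a 3-cycle branch penalty on 10% of instructions
print(1 + 0.10 * 3)                       # 1.3

# Q73: 5-stage speedup with 20% branches and a 4-cycle penalty
print(round(5 / (1 + 0.2 * 4), 2))        # 2.78

# Q74: efficiency of a 4-stage pipeline on 20 instructions
print(round(20 / (4 + 20 - 1) * 100, 1))  # 87.0 %
```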

75. An ideal CPI of 1 becomes an actual CPI of 1.4 due to:
A. Pipeline stalls
B. More stages
C. Cache hit
D. Higher clock

76. Pipeline buffer registers store:
A. Instruction results
B. Intermediate stage outputs
C. Control signals only
D. None

77. Increasing the number of stages can:
A. Increase throughput up to a limit
B. Always increase latency
C. Reduce hazards
D. Eliminate branches

78. Too many pipeline stages can:
A. Always help
B. Increase branch penalties
C. Reduce performance always
D. Remove data hazards

79. Non-uniform stage delays cause:
A. Balanced performance
B. Pipeline inefficiency
C. Reduced clock
D. None

80. Pipelining cannot be applied to:
A. Completely sequential tasks
B. Arithmetic operations
C. ALU functions
D. RISC CPUs

81. Instruction fetch and decode can overlap because:
A. Cache allows
B. They use different hardware units
C. Same register used
D. None

82. Instruction prefetch improves:
A. Latency
B. Pipeline utilization
C. Clock speed
D. Cache coherence

83. Pipeline depth means:
A. Number of stages
B. Instruction size
C. Clock cycles
D. Memory units

84. CPI > 1 in a pipeline indicates:
A. Perfect performance
B. Stalls or hazards present
C. Fast execution
D. Structural duplication

85. Pipeline hazards directly reduce:
A. Latency
B. Throughput
C. Cache hit
D. Register use

86. Pipeline speedup saturates due to:
A. Cache
B. Branch frequency and hazards
C. Clock drift
D. Power limit

87. Structural hazard prevention costs:
A. None
B. More hardware
C. Fewer registers
D. More hazards

88. Dynamic hazard resolution happens at:
A. Compile time
B. Runtime
C. Decode
D. Prefetch

89. Pipelining achieves parallelism in:
A. Instruction level
B. Task level
C. Thread level
D. Process level

90. When two dependent instructions are executed without a stall or forwarding, the result is:
A. Incorrect execution
B. Perfect execution
C. Faster performance
D. Reduced CPI

91. Control dependency refers to:
A. Data relation
B. Execution depends on branch decision
C. Structural sharing
D. None

92. A hazard that occurs because of limited buses is:
A. Structural hazard
B. Data hazard
C. Branch hazard
D. None

93. Pipeline flushing wastes:
A. Memory
B. Cycles spent on partially completed instructions
C. Registers
D. Cache

94. Compiler-level optimization for pipeline hazards is:
A. Instruction scheduling
B. Register renaming
C. Forwarding
D. Prediction

95. Hazards can be completely eliminated by:
A. Cache
B. None (only reduced or avoided)
C. Branch prediction
D. Data forwarding

96. CPI = 1 + Stall cycles per instruction represents:
A. Cache efficiency
B. Pipeline performance formula
C. Memory delay
D. Latency equation
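
The formula in question 96 is how the stall-related questions earlier are usually rolled into one number: average CPI is the ideal 1 plus the average stall cycles per instruction, and execution time is then N × CPI × clock period. A short illustrative calculation (the 0.4 stall rate echoes the CPI of 1.4 in question 75; the instruction count and clock are arbitrary assumptions):

```python
N = 1_000_000                 # instructions (illustrative)
clock_period = 1e-9           # 1 ns clock (illustrative)
stalls_per_instruction = 0.4  # average bubbles per instruction (cf. Q75)

cpi = 1 + stalls_per_instruction      # pipeline performance formula from Q96
exec_time = N * cpi * clock_period

print(cpi)                            # 1.4
print(f"{exec_time * 1e3:.2f} ms")    # 1.40 ms
```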

97. In pipelining, throughput increases while:
A. Latency remains almost the same
B. Latency decreases
C. CPI increases
D. None

98. Instruction reordering is used to:
A. Stall pipeline
B. Reduce hazards
C. Simplify ALU
D. Increase delay

99. Hazards can never occur if:
A. Cache hit
B. Instructions are independent
C. Clock is fast
D. Branches are removed

100. Pipelining efficiency is maximum when:
A. Frequent branches occur
B. No hazards or stalls exist
C. Stage times differ
D. Forwarding fails