MIPS (millions of instructions per second) was commonly used to gauge computer system performance up until the 1980s. Explain why it can be a very poor measure of a processor’s performance. Are there any circumstances under which it is a valid measure of performance? If so, describe those circumstances.
MIPS is a poor measure because different instructions take different amounts of time to execute; floating-point instructions, for example, typically require more clock cycles than simple integer instructions. A program that executes fewer but more complex instructions may accomplish the same work faster than a program that executes many simple instructions, yet the second program will report a higher MIPS rating. MIPS therefore rewards high instruction throughput rather than actual work completed, and it also depends on the instruction set, so it cannot fairly compare machines with different architectures.
MIPS can be a valid measure when the programs being compared use a similar instruction mix on machines with the same instruction set. In that case the work done per instruction is comparable, so the number of instructions executed per second directly tracks performance.
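A small worked example (with hypothetical instruction counts, CPIs, and a 1 GHz clock chosen purely for illustration) shows how MIPS can rank programs opposite to their actual execution times:

```python
CLOCK_HZ = 1_000_000_000  # assumed 1 GHz clock (illustrative)

def exec_time_s(instr_count, cpi, clock_hz=CLOCK_HZ):
    """Execution time = (instructions * cycles per instruction) / clock rate."""
    return instr_count * cpi / clock_hz

def mips(instr_count, time_s):
    """MIPS = instruction count / (execution time * 10^6)."""
    return instr_count / (time_s * 1e6)

# Hypothetical: the same task compiled two ways.
# Version A: many simple instructions (CPI = 1).
a_instr, a_cpi = 500_000_000, 1
# Version B: fewer, more complex instructions (CPI = 2).
b_instr, b_cpi = 100_000_000, 2

t_a = exec_time_s(a_instr, a_cpi)   # 0.5 s
t_b = exec_time_s(b_instr, b_cpi)   # 0.2 s

mips_a = mips(a_instr, t_a)         # 1000 MIPS
mips_b = mips(b_instr, t_b)         # 500 MIPS
```

Version A scores twice the MIPS of version B (1000 vs. 500), yet version B finishes the same task in less than half the time (0.2 s vs. 0.5 s): the higher MIPS rating points at the slower program.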