
I'm working on core frequency scaling experiments, and I see that as the core frequency increases (with everything else untouched, for example the DDR frequency is kept the same), the IPC values decrease. Can someone explain why this is so?

Example:

CF 1 GHz, IPC is 1

CF 2 GHz, IPC is 0.95

CF 3 GHz, IPC is 0.92

srccode
  • 101

2 Answers


My best guess is that your measurement is flawed and probably includes some (more or less) static overhead that your calculations are not accounting for. Consider this very simplified attempt to explain it:

| Cycles | Flawed IPC | Instruction count | Static overhead | "Actual" IPC |
|--------|------------|-------------------|-----------------|--------------|
| 1      | 1          | 1                 | 0.111           | 0.889        |
| 2      | 0.95       | 1.9               | 0.111           | 0.8945       |
| 3      | 0.92       | 2.76              | 0.111           | 0.883        |

…wherein “Actual IPC” is (Instruction count - Static overhead) / Cycles

That looks much more plausible, doesn’t it? All of this is further complicated by the fact that, unless you run these tests on bare metal, your test process is not the only thing executing. A further complication is that not all instructions are created equal: some take more cycles to execute than others.

Perhaps the method you use to read the cycle counter is what introduces this static overhead.
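As an illustration (not a measurement), here is a small C sketch that reproduces the arithmetic above. The 0.111 static overhead and the instruction counts are the hypothetical numbers from the table, not values taken from any real run; the point is only that a constant overhead makes the uncorrected IPC fall as the cycle count grows while the corrected IPC stays roughly flat.

```c
#include <stdio.h>

/*
 * Toy illustration with assumed numbers (matching the table above):
 * a constant measurement overhead, expressed as 0.111 instructions'
 * worth, makes the raw IPC appear to drop as cycles increase, while
 * the overhead-corrected IPC stays roughly constant.
 */
int main(void)
{
    const double overhead = 0.111;                    /* assumed static overhead */
    const double cycles[]       = {1.0, 2.0, 3.0};    /* normalised cycle counts */
    const double instructions[] = {1.0, 1.9, 2.76};   /* instructions retired */

    printf("cycles  flawed_ipc  corrected_ipc\n");
    for (int i = 0; i < 3; i++) {
        double flawed    = instructions[i] / cycles[i];
        double corrected = (instructions[i] - overhead) / cycles[i];
        printf("%6.2f  %10.4f  %13.4f\n", cycles[i], flawed, corrected);
    }
    return 0;
}
```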

user219095
  • 65,551

This seems a very logical outcome:

When the core frequency (clock) goes up while all other parameters stay the same, the memory speed does not change. Memory operations then take the same amount of wall-clock time, which is more time when counted in core cycles. This means that the instructions per cycle (IPC) figure is lower.
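To make that concrete, here is a minimal C sketch of a toy model; every parameter in it (miss rate, DRAM latency, base CPI) is an assumption chosen for illustration, not something measured on your system. It shows how a fixed memory latency in nanoseconds costs more core cycles as the core frequency rises, so the modelled IPC falls.

```c
#include <stdio.h>

/*
 * Toy model with assumed parameters: a fraction of instructions miss
 * to DRAM, whose latency is fixed in nanoseconds (DDR unchanged).
 * At higher core frequencies that latency is worth more core cycles,
 * so the modelled IPC drops even though nothing else changed.
 */
int main(void)
{
    const double miss_rate   = 0.02;   /* assumed: 2% of instructions go to memory */
    const double mem_ns      = 80.0;   /* assumed: DRAM latency in nanoseconds */
    const double base_cpi    = 1.0;    /* assumed: 1 cycle per instruction otherwise */
    const double freqs_ghz[] = {1.0, 2.0, 3.0};

    printf("core_freq_GHz  mem_latency_cycles  modelled_IPC\n");
    for (int i = 0; i < 3; i++) {
        double mem_cycles = mem_ns * freqs_ghz[i];        /* ns * GHz = core cycles */
        double cpi        = base_cpi + miss_rate * mem_cycles;
        printf("%13.1f  %18.1f  %12.3f\n", freqs_ghz[i], mem_cycles, 1.0 / cpi);
    }
    return 0;
}
```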

Note that IPC is a very imprecise measure, as explained in the Wikipedia article on instructions per cycle.

See also the post "Why can't you have both high instructions per cycle and high clock speed?"

harrymc
  • 498,455