52

Is the frequency of a CPU just a mean value of roughly how many clock ticks there are in a second, or does it have a stronger, physical stability?

My guess is that it is neither perfectly stable nor completely unstable. So is there any information available about the variance of a CPU's clock?

Is a CPU's cycle duration strictly synchronized to the crystal's vibration, or does the CPU just have to be sure to complete a cycle before the next tick?

Gaël Barbin
  • 1,009

5 Answers

49

As with any complicated thing, you can describe the way a CPU operates at various levels.

At the most fundamental level, a CPU is driven by an accurate clock. The frequency of the clock can change; think Intel’s SpeedStep. But at all times the CPU is absolutely 100% locked to the clock signal.

CPU instructions operate at a much higher level. A single instruction is a complex thing and can take anywhere from less than one cycle to thousands of cycles to complete as explained here on Wikipedia.

So basically an instruction will consume some number of clock cycles. In modern CPUs, due to technologies like multiple cores, HyperThreading, pipelining, caching, out-of-order and speculative execution, the exact number of clock cycles for a single instruction is not guaranteed, and will vary each time you issue such an instruction!
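A quick way to see this variability for yourself is to time the same piece of work repeatedly with the CPU's timestamp counter. The sketch below assumes an x86-64 machine and GCC or Clang (for __rdtsc() from <x86intrin.h>); on modern parts the TSC ticks at a fixed reference rate rather than counting actual core cycles, but the run-to-run spread still shows that the cost of the same instructions is not constant.

    /* Minimal sketch: time the same small loop many times and report the
     * spread. Caches, interrupts and frequency scaling make the maximum
     * noticeably larger than the minimum. */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtsc() */

    int main(void)
    {
        volatile uint64_t sink = 0;
        uint64_t min = UINT64_MAX, max = 0;

        for (int run = 0; run < 1000; run++) {
            uint64_t start = __rdtsc();
            for (int i = 0; i < 1000; i++)   /* identical work every run */
                sink += (uint64_t)i;
            uint64_t ticks = __rdtsc() - start;
            if (ticks < min) min = ticks;
            if (ticks > max) max = ticks;
        }
        printf("min ticks: %llu  max ticks: %llu\n",
               (unsigned long long)min, (unsigned long long)max);
        return 0;
    }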

EDIT

is there any information available about the variance for a specific CPU?

Yes and no. 99.99% of end-users are interested in overall performance, which can be quantified by running various benchmarks.

What you're asking for is highly technical information. Intel does not publish complete or accurate information about CPU instruction latency/throughput.

There are researchers who have taken it upon themselves to try to figure this out. Here are two PDFs that may be of interest:

Unfortunately it's hard to get variance data. Quoting from the first PDF:

numbers listed are minimum values. Cache misses, misalignment, and exceptions may increase the clock counts considerably.

Interesting reading nevertheless!

misha256
  • 11,543
31

Are CPU clock ticks strictly periodic in nature?

Of course not. Even the very, very best clocks aren't strictly periodic. The laws of thermodynamics say otherwise:

  • Zeroth law: There's a nasty little game the universe plays on you.
  • First law: You can't win.
  • Second law: But you just might break even, on a very cold day.
  • Third law: It never gets that cold.

The developers of the very, very best clocks try very, very hard to overcome the laws of thermodynamics. They can't win, but they do come very, very close to breaking even. The clock on your CPU? It's garbage in comparison to those best atomic clocks. This is why the Network Time Protocol exists.


Prediction: We will once again see a bit of chaos when the best atomic clocks in the world go from 2015 30 June 23:59:59 UTC to 2015 30 June 23:59:60 UTC to 2015 1 July 00:00:00 UTC. Too many systems don't recognize leap seconds and have their securelevel set to two (which prevents a time change of more than one second). The clock jitter in those systems means that the Network Time Protocol leap second will be rejected. A number of computers will go belly up, just like they did in 2012.

22

Around 2000, when the clock speeds of CPUs started to get into the range where mobile phones also operated, it became common to add a variation to the actual clock speed. The reason is simple: if the CPU clock is exactly 900 MHz, all the electronic interference is generated at that frequency. Vary the clock frequency a bit, between 895 and 905 MHz, and the interference is distributed over that range instead.

This was possible because modern CPUs are heat-limited. They have no problem running slightly faster for a short period of time, as they can cool down when the clock is slowed down later.
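As a rough sketch of the idea (all numbers here are invented for illustration), a spread-spectrum clock generator sweeps the target frequency up and down around the nominal value, typically with something like a triangle wave, so the radiated energy is smeared over a band instead of concentrated in one narrow spike:

    /* Toy illustration of spread-spectrum clocking: sweep the clock target
     * between 895 and 905 MHz with a triangle wave over one modulation period. */
    #include <stdio.h>

    int main(void)
    {
        const double f_center = 900.0, f_dev = 5.0;   /* MHz */
        const int steps = 40;                         /* samples per modulation period */

        for (int i = 0; i < steps; i++) {
            double phase = (double)i / steps;                 /* 0 .. 1 */
            double tri = phase < 0.5 ? 4.0 * phase - 1.0      /* -1 .. +1 .. -1 */
                                     : 3.0 - 4.0 * phase;
            printf("step %2d: clock target %.2f MHz\n", i, f_center + f_dev * tri);
        }
        return 0;
    }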

MSalters
  • 8,283
22

Digital logic designer here. The actual time taken for a logic network to change in response to an input signal is the propagation delay. Think of the system as:

registers A,B,C... ---> logic cloud ---> registers A',B',C'

The "launch clock" is the clock edge at which time the first set of registers change. The "capture clock" is the next clock edge one period later. In order for the system to work the output of the logic cloud has to be stable before the capture clock arrives.

The process of making sure this works is called timing analysis. Using a physics-based simulation of the system, work out the worst-case arrival time of any input to any output. The largest of these numbers across the system sets the minimum clock period.
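As a back-of-the-envelope illustration of what that analysis produces (the delay figures below are invented; a real tool extracts them from the physical design), the minimum period is the launch register's clock-to-output delay, plus the worst-case path through the logic cloud, plus the capture register's setup time:

    #include <stdio.h>

    int main(void)
    {
        double t_clk_to_q = 0.10;   /* ns: launch register output delay        */
        double t_logic    = 0.75;   /* ns: worst-case path through the cloud   */
        double t_setup    = 0.05;   /* ns: capture register setup requirement  */

        double t_min = t_clk_to_q + t_logic + t_setup;   /* minimum clock period, ns */
        printf("minimum period: %.2f ns -> maximum frequency: %.2f GHz\n",
               t_min, 1.0 / t_min);
        return 0;
    }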

Note worst case. The actual propagation time will be shorter, but it depends on manufacturing process variation, current temperature, and chip voltage (PVT). This means in practical terms you can apply a faster clock (overclocking) and it may work. It may also start producing errors, such as deciding that 0x1fffffff + 1 = 0x1f000000 if the carry bit doesn't arrive in time.
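The carry example can be modelled in a few lines. The sketch below is a toy ripple-carry adder whose carry chain is artificially cut short, standing in for a carry signal that hasn't propagated to the upper bits by the time the capture clock arrives (the cutoff parameter is invented; real silicon fails far less predictably):

    #include <stdio.h>
    #include <stdint.h>

    /* Add a and b, but force the carry to zero from bit `cutoff` upward,
     * as if the clock edge arrived before the carry reached those bits. */
    static uint32_t add_with_late_carry(uint32_t a, uint32_t b, int cutoff)
    {
        uint32_t sum = 0;
        int carry = 0;
        for (int i = 0; i < 32; i++) {
            if (i == cutoff)
                carry = 0;                    /* carry "hasn't arrived" yet */
            int ai = (a >> i) & 1, bi = (b >> i) & 1;
            sum |= (uint32_t)(ai ^ bi ^ carry) << i;
            carry = (ai & bi) | (ai & carry) | (bi & carry);
        }
        return sum;
    }

    int main(void)
    {
        printf("full carry: 0x%08x\n", add_with_late_carry(0x1fffffff, 1, 32)); /* 0x20000000 */
        printf("late carry: 0x%08x\n", add_with_late_carry(0x1fffffff, 1, 24)); /* 0x1f000000 */
        return 0;
    }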

Chips may also have more than one clock on board (usually the FSB is slower than the core), and the actual clock may be ramped up or down for thermal-control purposes or varied (see MSalters' answer about using spread spectrum to pass EMC tests).

pjc50
  • 6,186
2

Is a CPU's instruction duration strictly synchronized to the crystal vibration? Or does the CPU just have to be sure to achieve an instruction before the next tick?

Neither. The instruction duration will be some number of clock ticks, but that number can vary based on the requirements of the instruction. For example, if an instruction can't make forward progress until a particular memory location is in the L1 cache, then it will not complete before the next clock tick; no forward progress on that instruction is made until the data arrives.

But when the CPU does decide to do something, the basic method by which it does it is to set up its internal switches so that a particular piece of information goes to a particular portion of the CPU. Then it waits for the input to arrive at that portion and for the output to arrive at the next portion. This waiting is the purpose of the clock.

Imagine a physical circuit that takes two binary inputs and sums them, outputting the sum on some third set of wires. To do an addition, the CPU must arrange for the two numbers to be added to get to this adder and the outputs to go to, say, a CPU register latch. The CPU can't tell the latch to store the output until the inputs reach the adder, the adder produces the output, and the output reaches the latch. This is the purpose of the clock -- to set the wait time between arranging input to go somewhere and expecting the output to be ready to use.
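A toy model of that arrangement might look like the sketch below (the structure and names are invented purely for illustration): the adder's combinational output is only copied into the latch at the clock tick, and the clock period is what guarantees the output has settled by then.

    #include <stdio.h>
    #include <stdint.h>

    struct adder_stage {
        uint32_t in_a, in_b;   /* inputs routed to the adder                */
        uint32_t latch;        /* register that captures the adder's output */
    };

    /* Combinational logic: only trustworthy after the worst-case propagation
     * delay, which the clock period is chosen to cover. */
    static uint32_t adder_output(const struct adder_stage *s)
    {
        return s->in_a + s->in_b;
    }

    /* Clock edge: only now does the latch take the adder's output. */
    static void clock_tick(struct adder_stage *s)
    {
        s->latch = adder_output(s);
    }

    int main(void)
    {
        struct adder_stage s = { .in_a = 40, .in_b = 2, .latch = 0 };
        printf("before tick: latch = %u\n", (unsigned)s.latch);  /* still the old value */
        clock_tick(&s);
        printf("after tick:  latch = %u\n", (unsigned)s.latch);  /* now holds the sum   */
        return 0;
    }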