Disclaimer: I'm not sure which SE site is appropriate for this, so if Super User is the wrong place I apologize. I realize this is a very broad question with a probably very complex answer, but how is it that every year or two, CPU and computer engineers in general are able to improve the performance of their parts? Today the performance increases come more from efficiency than from raw megahertz, I get that, but even then, how is that efficiency increased? The main thing that confuses me is how quickly new designs are made. I would think that efficiency-increasing ideas are hard to come by, so how is it that people have enough of them to release new generations as fast as they do?
1 Answer
The simple answer is that we don't actually see big year-on-year improvements, so the premise isn't quite correct.
Keep in mind that the release cadence is driven by business reasons, not technical ones: there may not be a significant improvement to ship, but consumers expect a yearly release, so that's what manufacturers deliver.
The more complex answer is that there are many aspects to CPU performance:
- The microarchitecture, which affects:
  - How quickly it can execute individual instructions (instructions per cycle), which varies by instruction.
  - How quickly it can process sequences of instructions, through things like pipelining, branch prediction, and caching (a small cache-effects demo is sketched after this list).
  - Which specialised instructions are supported: things like AES-NI, which hugely speeds up encryption, or SIMD (SSE, AVX, etc.), which hugely speeds up bulk data tasks like image processing (a short SIMD sketch also follows this list).
  - See more: https://superuser.com/a/906227/117590
- The clock speed, which determines how many cycles you get per second. Clock speeds have largely stalled, but we're still working on efficiency improvements that allow higher clocks without frying the CPU or requiring excessive cooling.
- The number of cores, which determines how many independent instruction streams can be processed at once. This is, again, limited by efficiency. (A rough back-of-the-envelope combining clock speed, instructions per cycle, and core count follows this list.) See also: https://superuser.com/a/797486/117590
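To get a feel for how these factors multiply together, here's a deliberately simplified back-of-the-envelope with hypothetical numbers: a 4-core CPU running at 3 GHz and averaging 2 instructions per cycle per core could retire roughly 4 × 3 billion × 2 = 24 billion instructions per second on perfectly parallel work. Real workloads fall far short of that peak because of memory stalls, mispredicted branches, and serial sections, which is exactly why the microarchitectural details above matter so much.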
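To make the "sequences of instructions" point concrete, here is a rough C sketch (an illustration, not a rigorous benchmark): both loops perform essentially the same number of additions, but the cache-friendly sequential walk usually finishes several times faster than the strided walk. Exact numbers depend entirely on your CPU, memory system, and compiler.

```c
/*
 * Rough illustration: same work, very different memory access pattern.
 * Build with e.g.: gcc -O2 cache_demo.c -o cache_demo
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)     /* 16M ints (~64 MiB): larger than typical caches */
#define STRIDE 4096     /* jump far enough to miss the cache on most accesses */

static double seconds(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    int *a = malloc((size_t)N * sizeof *a);
    if (!a) return 1;
    for (long i = 0; i < N; i++) a[i] = 1;

    /* Sequential access: caches and the hardware prefetcher work well. */
    double t0 = seconds();
    long sum_seq = 0;
    for (long i = 0; i < N; i++) sum_seq += a[i];
    double t1 = seconds();

    /* Strided access: same number of additions, but most loads miss the cache. */
    double t2 = seconds();
    long sum_str = 0;
    for (long s = 0; s < STRIDE; s++)
        for (long i = s; i < N; i += STRIDE)
            sum_str += a[i];
    double t3 = seconds();

    printf("sequential: sum=%ld time=%.3fs\n", sum_seq, t1 - t0);
    printf("strided:    sum=%ld time=%.3fs\n", sum_str, t3 - t2);
    free(a);
    return 0;
}
```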
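And to show why specialised instructions like SIMD matter, here's a minimal sketch assuming an x86-64 compiler with AVX support (build with something like `gcc -O2 -mavx`). The AVX loop adds eight floats per instruction where the scalar loop adds one; in practice you'd usually rely on libraries or the compiler's auto-vectoriser rather than hand-written intrinsics.

```c
/*
 * Minimal SIMD sketch: scalar sum vs. 256-bit AVX sum of floats.
 * Assumes an x86-64 CPU with AVX; build with: gcc -O2 -mavx simd_demo.c
 */
#include <stdio.h>
#include <immintrin.h>

#define N 1024  /* multiple of 8 so the example needs no tail loop */

/* One float per loop iteration. */
static float sum_scalar(const float *x, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += x[i];
    return s;
}

/* Eight floats per loop iteration using 256-bit AVX registers. */
static float sum_avx(const float *x, int n) {
    __m256 acc = _mm256_setzero_ps();
    for (int i = 0; i < n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(x + i));
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float s = 0.0f;
    for (int i = 0; i < 8; i++) s += lanes[i];
    return s;
}

int main(void) {
    float x[N];
    for (int i = 0; i < N; i++) x[i] = 1.0f;
    printf("scalar: %f\n", sum_scalar(x, N));
    printf("avx:    %f\n", sum_avx(x, N));
    return 0;
}
```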
The now-retired tick-tock model shows how this was handled in the past: one year you'd see a microarchitecture improvement (the "tock"), the next a "die shrink" (the "tick"), increasing efficiency by moving to a smaller process size. While the die shrink was being applied to the existing microarchitecture, the next-generation microarchitecture could be designed in parallel. More recently this has slowed down, because we're running out of small improvements to squeeze out on both the architecture and process-size fronts.
For example, the recent Intel Coffee Lake generation had minimal improvements over Kaby Lake, which itself had minimal improvements over Skylake. The architecture itself has remained more or less the same, with some minor improvements to SIMD instructions and side improvements such as to the memory controller. The headline change, if any, was marginally higher clock speeds coming from efficiency gains in the manufacturing process; Coffee Lake's headline change was an increase in core counts, likely largely for marketing purposes (competition with AMD).
Rarely, we also see large jumps, as with the old Intel Core and the recent AMD Zen architectures. Multiple design teams work on different architectures in parallel, and occasionally, when the primary line's advancement stalls, an architecture built on different ideas can "take over" (Core replaced NetBurst; Zen replaced the Excavator line).
And then, outside the desktop CPU world, there's a huge push for power efficiency in battery-powered devices like laptops and tablets. That's been the headline feature of a lot of new architectures: they're not necessarily faster, but they're more efficient, so your battery lasts longer.