The chips are down for Moore’s law

Feb 17, 2016

Photo credit: Rebecca Mock

By M. Mitchell Waldrop

Next month, the worldwide semiconductor industry will formally acknowledge what has become increasingly obvious to everyone involved: Moore’s law, the principle that has powered the information-technology revolution since the 1960s, is nearing its end.

A rule of thumb that has come to dominate computing, Moore’s law states that the number of transistors on a microprocessor chip will double every two years or so — which has generally meant that the chip’s performance will, too. The exponential improvement that the law describes transformed the first crude home computers of the 1970s into the sophisticated machines of the 1980s and 1990s, and from there gave rise to high-speed Internet, smartphones and the wired-up cars, refrigerators and thermostats that are becoming prevalent today.
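The doubling the article describes is easy to put in numbers. The sketch below projects transistor counts under ideal Moore's-law growth; the 1971 starting point (the Intel 4004's roughly 2,300 transistors) is used here only as an illustrative baseline, not a figure from the article.

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count per chip under ideal Moore's-law doubling.

    base_year/base_count default to the Intel 4004 (~2,300 transistors, 1971)
    purely for illustration.
    """
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1981, 1991, 2001):
    print(y, round(transistors(y)))
```

Ten years of doubling every two years multiplies the count by 2^5 = 32; forty years multiplies it by about a million, which is roughly the gap between the first microprocessors and today's chips.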

None of this was inevitable: chipmakers deliberately chose to stay on the Moore’s law track. At every stage, software developers came up with applications that strained the capabilities of existing chips; consumers asked more of their devices; and manufacturers rushed to meet that demand with next-generation chips. Since the 1990s, in fact, the semiconductor industry has released a research road map every two years to coordinate what its hundreds of manufacturers and suppliers are doing to stay in step with the law — a strategy sometimes called More Moore. It has been largely thanks to this road map that computers have followed the law’s exponential demands.

Not for much longer. The doubling has already started to falter, thanks to the heat that is unavoidably generated when more and more silicon circuitry is jammed into the same small area. And some even more fundamental limits loom less than a decade away. Top-of-the-line microprocessors currently have circuit features that are around 14 nanometres across, smaller than most viruses. But by the early 2020s, says Paolo Gargini, chair of the road-mapping organization, “even with super-aggressive efforts, we’ll get to the 2–3-nanometre limit, where features are just 10 atoms across. Is that a device at all?” Probably not — if only because at that scale, electron behaviour will be governed by quantum uncertainties that will make transistors hopelessly unreliable. And despite vigorous research efforts, there is no obvious successor to today’s silicon technology.


4 comments on “The chips are down for Moore’s law”

  • Where next? Every once in a while I read of radical new technologies promising at least an order of magnitude more density. 3D?

    It seems to me, even if you did nothing to improve density, you could still put 4 times as much on a CPU by putting four chips inside.


  • @ Roedy: “It seems to me, even if you did nothing to improve density, you could still put 4 times as much on a CPU by putting four chips inside.”

    Yes, it's called a quad-core processor. There are diminishing returns in throughput as the number of cores is increased. Also, as the die gets bigger, propagation delays in the circuit limit performance.
    Heat dissipation is worse if you try to stack chips, so you gradually reach a circuit-density limit.
    CPU clock speed is also limited by the basic fact that signals cannot travel across the circuit faster than the speed of light. At 3 GHz an electromagnetic wave travels only about 100 mm in each clock cycle (1 ns), and many thousands of transistors have to switch in that time. At 30 GHz the signal will only just get across a 10 mm chip in each clock cycle. So somewhere between 3 and 30 GHz you start to hit fundamental limits.
    If you try to get round this by making the circuit smaller, you hit quantum effects: there is no longer a large enough statistical sample of charge carriers within each transistor, so transistors start to behave randomly.
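The commenter's back-of-envelope arithmetic is easy to verify: one clock period at frequency f lasts 1/f seconds, and light covers c/f metres in that time. (On a real chip, signals propagate at only a fraction of c, so the practical limit is tighter.)

```python
C = 299_792_458  # speed of light in vacuum, m/s

def distance_per_cycle_mm(freq_hz):
    """Distance light travels in one clock period, in millimetres."""
    return C / freq_hz * 1000

print(distance_per_cycle_mm(3e9))   # ~100 mm at 3 GHz
print(distance_per_cycle_mm(30e9))  # ~10 mm at 30 GHz
```

This confirms the figures in the comment: about 100 mm per cycle at 3 GHz and about 10 mm at 30 GHz, which is why clock frequency alone stopped being a route to faster chips.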


  • It’s already taking ages to program for the number of threads we have on average today. Software development needs to change drastically as we get closer to the 1 nm mark, otherwise full utilization cannot be achieved. Even today, optimized low-level code can pull a lot of performance out of relatively old hardware (2–3 years). It seems code will be the restriction, not so much the hardware.

