In 1965, Gordon Moore famously predicted that the number of components on an integrated circuit (a computer “chip”) would double every year — a prediction now known as Moore’s Law. For decades it largely held true, but people have long been predicting its end, mostly on the grounds of basic physics: to fit more transistors on a chip, you have to make each transistor smaller. We generally talk about transistor size in terms of “scale,” and we’ve been measuring it in nanometers for a few decades now. Back in 2005, some researchers at Intel predicted we would hit the limit around the 100nm scale. Ultimately, we blew right through that, but as I wrote back in newsletter #21, the biggest of the big boys — Intel and Nvidia — now agree, and they’re right: we’re no longer doubling every year.

That said, transistors are still shrinking, and Intel is losing the race. Intel is currently shipping 10nm chips, but only after incessant delays, and those delays are part of what led Apple to switch to its own ARM chips, which are already 7nm. Intel just announced another delay and doesn’t expect to ship 7nm CPUs until 2023. And it’s not just Apple: Amazon builds its own devices for its AWS cloud service, and it’s building on TSMC’s current 7nm tech. Going forward, while Intel is saying 2023 for 7nm, TSMC is saying it will have 3nm in the same timeframe, and even AMD is saying 5nm before the end of 2022. None of this bodes well for Intel.
But aside from Intel getting leapfrogged by the competition, this has deeper implications for the computing industry as a whole, and for those getting their computer science education today. Every year since 2001, the MIT Technology Review has published a list of the 10 most important technological breakthroughs of the year. Almost all of them have been possible only because of Moore’s Law — because of the increase in processing power. This is true regardless of the industry in which the advance occurred (for example, advances in anti-aging treatments and personalized drugs have only been possible because of increased computing power). If we’re now doubling transistor density every five years instead of every year, that has obvious implications for the speed of technological advancement across the board. And at some point, we really will run into laws-of-physics problems. What does this mean for innovation? Well, at a minimum, it means that computer programmers are going to have to learn how to program again.
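To make the gap between those two doubling rates concrete, here’s a quick back-of-the-envelope sketch (the function name and the ten-year horizon are my own illustrative choices, not figures from any chip roadmap):

```python
def density_growth(years, doubling_period_years):
    """Relative transistor density after `years`, starting from 1x,
    assuming density doubles once every `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Doubling every year: after a decade, density is 2^10 = 1024x.
print(density_growth(10, 1))   # 1024.0

# Doubling every five years: after a decade, only 2^2 = 4x.
print(density_growth(10, 5))   # 4.0
```

Over a single decade, that’s the difference between a thousandfold improvement and a fourfold one — which is why a slower Moore’s Law ripples out into every field that rides on cheap compute.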