Intel is getting leap-frogged

In 1965 Gordon Moore famously predicted that the number of components on an integrated circuit (a computer “chip”) would double every year, a prediction now known as Moore’s Law. It largely held, but for years we’ve been predicting its end, mostly because of the laws of physics: to fit more transistors on a chip, you have to make them smaller. We talk about transistor size in terms of “scale,” and we’ve been measuring it in nanometers for a few decades now. Back in 2005, researchers at Intel predicted that we would hit the limit at the 100nm scale (transistors that were 100nm or smaller). Ultimately, we blew right through that, but as I wrote back in newsletter #21, the biggest of the big boys, Intel and Nvidia, agreed, and they’re right: we’re no longer doubling every year.

That said, transistors are still shrinking, and Intel is losing the race. Intel is currently shipping 10nm chips, but only after incessant delays, which is part of what led Apple to switch to its own ARM chips, already at 7nm. Intel just announced another delay and doesn’t expect to ship 7nm CPUs until 2023. And it’s not just Apple: Amazon builds its own devices for its AWS cloud service, and it’s building on TSMC’s current 7nm tech. Going forward, while Intel is saying 2023 for 7nm, TSMC says it will have 3nm in the same timeframe, and even AMD is promising 5nm before the end of 2022. None of this bodes well for Intel.

But aside from Intel getting leap-frogged by the competition, this has deeper implications for the computing industry as a whole, and for anyone getting a computer science education today. Every year since 2001, the MIT Technology Review has published a list of the 10 most important technological breakthroughs of the year. Almost all of them have been possible only because of Moore’s Law, because of the increase in processing power. This is true regardless of the industry in which the advance occurred (advances in anti-aging treatments and personalized drugs, for example, have only been possible because of increased computing power). If we’re now doubling transistor density every five years instead of every year, that has obvious implications for the speed of technological advancement across the board. And, at some point, we really will run into laws-of-physics problems. What does this mean for innovation? Well, at a minimum, it means that computer programmers are going to have to learn how to program again.

Almost everyone in high school, anywhere in the world, is learning how to program today. A lot of that education is focused on the thought process of programming: constructing logical flows of instructions that produce specific results. That education is tremendously important. However, most of the programming happens in very easy-to-use languages like JavaScript or Python. This is mostly fine; it allows nearly everyone to program, and it certainly makes high-tech careers possible for many. But these languages tend to be extremely inefficient. Researchers at MIT got computations to run 47 times faster by writing code in the grand-daddy of programming languages, “C”, instead of in Python. And by going farther, and using specialty chipsets rather than general-purpose CPUs and GPUs, they showed that calculations taking 7 hours in Python on general-purpose hardware could be executed in 0.41 seconds (7 hours vs. half a second is a non-trivial improvement). The problem for the software world as a whole is that programming in C, never mind writing code to take specific advantage of hardware-level capabilities in the chipset, is much harder than sitting down and banging out a simple program in Python. In reality, C probably won’t be the answer, because there are better, modern languages with the same performance advantages. At the moment, it seems like Rust is winning that battle in the high-end world of computer science. But whether it’s Rust, C++, or C, we’re going to need significantly more sophisticated software engineers in the near future if we’re going to keep up our pace of technological innovation.
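To make the comparison concrete: the MIT benchmark was, reportedly, a large matrix multiplication, and the sketch below (my own illustration, not the researchers’ code) shows the kind of triple-nested loop where interpreted Python pays interpreter overhead on every single multiply-and-add — exactly the overhead a C compiler eliminates by turning the same loops into raw machine instructions.

```python
def matmul_naive(a, b):
    """Multiply matrices a (n x m) and b (m x p) using plain Python lists and loops.

    Every multiply-add below is dispatched through the Python interpreter,
    which is why this style of code runs orders of magnitude slower than
    the equivalent compiled C.
    """
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]  # hoist the repeated lookup out of the inner loop
            for j in range(p):
                c[i][j] += aik * b[k][j]
    return c

# Tiny sanity check (the MIT comparison reportedly ran on 4096 x 4096 matrices):
print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```

The inner loop is the whole story: for a 4096×4096 multiply it executes roughly 68 billion times, and in Python each iteration carries dynamic type checks and object allocation that compiled languages simply don’t pay.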

Posted in Economy, Newsletter, Technology.

Chris Richardson has strong opinions on just about everything. Just ask.