The Chip Shortage, Giant Chips, and the Future of Moore’s Law

With COVID-19 shaking the global supply chain like an angry toddler with a box of jelly beans, the average person had to take a crash course in the semiconductor industry. And many of them didn't like what they learned. Want a new car? Tough luck, not enough chips. A new gaming system? Same. But you are not the average person, dear reader. So, in addition to learning why there was a chip shortage in the first place, you also discovered that you can, with considerable effort, fit more than 2 trillion transistors on a single chip. And you found that the future of Moore's Law depends as much on where you put the wires as on how small you make the transistors, among many other things.

So to recap the semiconductor stories you read most this year, we’ve put together this set of highlights:

How and When the Chip Shortage Will End, in 4 Charts

This year you learned the same thing that some carmakers did: Even if you think you’ve hedged your bets by having a diverse set of suppliers, those suppliers—or the suppliers of those suppliers—might all be using the output of the same small set of semiconductor fabs.

To recap: Carmakers panicked and canceled orders at the outset of the pandemic. Then when it seemed people still wanted cars, they discovered that all of the display drivers, power-management chips, and other low-margin stuff they needed had already been sucked up into the work/learn/live-from-home consumer frenzy. By the time they got back in line to buy chips, that line was nearly a year long, and it was time to panic again.

Chipmakers worked flat out to meet demand and have unleashed a blitz of expansion, though most of that is aimed at higher-margin chips than those that clogged the engine of the automotive sector. The latest numbers, from the chip manufacturing equipment industry association SEMI, show sales of equipment set to cross US $100 billion in 2021—a mark never before reached.

As for carmakers, they may have learned their lesson. At a gathering of stakeholders in the automotive electronics supply chain this summer at GlobalFoundries Fab 8 in Malta, N.Y., there was enthusiastic agreement that carmakers and chipmakers needed to get cozy with each other. The result? GlobalFoundries has already inked agreements with both Ford and BMW.

Next-Gen Chips Will Be Powered From Below Transistors

You can make transistors as small as you want, but if you can't connect them to each other, there's no point. So Arm and the Belgian research institute Imec spent a few years finding room for those connections. The best scheme they found was to take the interconnects that carry power (as opposed to data) to logic circuits and bury them under the surface of the silicon, linking them to a power-delivery network built on the backside of the chip. This research trend suddenly became news when Intel said what sounded like "Oh yeah. We're definitely doing that in 2025."

Cerebras’s New Monster AI Chip Adds 1.4 Trillion Transistors

What has 2.6 trillion transistors, consumes 20 kilowatts, and carries enough internal bandwidth to stream a billion Netflix movies? It’s generation 2 of the biggest chip ever made, of course! (And yes, I know that’s not how streaming works, but how else do you describe 220 petabits per second of bandwidth?) Last April, Cerebras Systems topped its original, history-making AI processor with a version built using a more advanced chipmaking technology. The result was a more than doubling of the on-chip memory to an impressive 40 gigabytes, an increase in the number of processor cores from the previous 400,000 to a speech-stopping 850,000, and a mind-boggling boost of 1.4 trillion additional transistors.
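If you want to sanity-check the Netflix quip, a quick back-of-envelope calculation works. Here's a minimal sketch; the 220 petabits per second comes from Cerebras, but the 25-megabit-per-second figure for a single 4K stream is our assumption, not theirs:

```python
# Back-of-envelope: how many video streams fit in 220 Pb/s of bandwidth?
fabric_bandwidth_bps = 220e15  # 220 petabits per second, per Cerebras
stream_bitrate_bps = 25e6      # assumed bitrate of one 4K stream (not a Cerebras figure)

streams = fabric_bandwidth_bps / stream_bitrate_bps
print(f"~{streams:.1e} simultaneous streams")  # ~8.8e+09 -- billions of movies at once
```

Billions, in fact, which makes "a billion Netflix movies" a conservative boast.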

Gob-smacking as all that is, what really matters is what you can do with it. And later in the year, Cerebras showed a way for the computer that houses its Wafer Scale Engine 2 to train neural networks with as many as 120 trillion parameters. For reference, the massive—and occasionally foul-mouthed—GPT-3 natural-language processor has 175 billion, roughly a 700th as many. What's more, you can now link up to 192 of these computers together.
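Some quick arithmetic shows why a model that size can't simply live in the chip's 40 gigabytes of on-chip memory. Here's a minimal sketch, assuming 2 bytes per parameter (as in fp16, our assumption rather than a Cerebras spec):

```python
# Rough memory footprint of a 120-trillion-parameter neural network
params = 120e12      # parameter count from Cerebras's claim
bytes_per_param = 2  # assumed fp16 storage (an assumption, not a Cerebras spec)

weights_tb = params * bytes_per_param / 1e12
print(f"~{weights_tb:.0f} TB just for the weights")  # ~240 TB
```

At around 240 terabytes for the weights alone, the parameters clearly have to live somewhere off the wafer.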

Of course, Cerebras’s computers aren’t the only ones meant to tackle absolutely huge AI training jobs. SambaNova is after the same title, and clearly Google has its eye on some awfully big neural networks, too.

IBM Introduces the World’s First 2-nm Node Chip

IBM claimed to have developed what it calls a 2-nanometer node chip and expects to see it in production in 2024. To put that in context, leading chipmakers TSMC and Samsung are going full bore on 5 nm, with a possible cautious start for 3 nm in 2022. As we reminded you last year, what you call a technology process node has absolutely no relation to the size of any part of the transistors it constructs. So whether IBM's process is any better than its rivals' will really come down to the combination of density, power consumption, and performance.

The real significance is that IBM's process is another endorsement of nanosheet transistors as the future of silicon. While each big chipmaker is moving from today's FinFET design to nanosheets at its own pace, nanosheets are inevitable.

RISC-V Star Rises Among Chip Developers Worldwide

The news hasn’t all been about transistors. Processor architecture is increasingly important. Your smartphones’ brains are probably based on an Arm architecture, your laptop and the servers it’s so attached to are likely based on the x86 architecture. But a fast-growing cadre of companies, particularly in Asia, are looking to an open-source chip architecture called RISC-V. The attraction is to allow startups to design custom chips without the costly licensing fees for proprietary architectures.

Even big companies like Nvidia are incorporating RISC-V cores, and Intel expects RISC-V to boost its foundry business. Chinese firms, seeing the architecture as a possible path to independence in an increasingly polarized technology landscape, are particularly bullish. Only last month, Alibaba said it would make the source code for its RISC-V core available.

New Optical Switch Up to 1,000x Faster Than Transistors

Although certain types of optical computing are getting closer, the switch that researchers in Russia and at IBM described in October is likely destined for a computer that's far in the future. Relying on exotic phenomena like exciton-polaritons and Bose-Einstein condensates, the device switched about 1 trillion times per second. That's so fast that light would travel only about a third of a millimeter before the device switches again.
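That distance checks out with simple arithmetic: divide the speed of light by the switching rate. A minimal sketch (the ~1-terahertz rate is approximate, per the article):

```python
# How far does light travel between switching events at ~1 THz?
c = 3.0e8           # speed of light in vacuum, meters per second
switch_rate = 1e12  # ~1 trillion switching events per second (approximate)

distance_mm = c / switch_rate * 1000
print(f"~{distance_mm:.2f} mm per switching cycle")  # ~0.30 mm, a third of a millimeter
```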