The CPU’s Silent Partner: The Coprocessor’s Role Is Often Unappreciated

While coprocessors have taken many forms, the most important one today is the cloud

By Mark Pesce

One reason the PC has endured for nearly 40 years is that its design was almost entirely open: No patents restricted reproduction of the fully documented hardware and firmware. When you bought a PC, IBM gave you everything you needed to manufacture your own clone. That openness seeded an explosion of PC compatibles, the foundation of the computing environment we enjoy today.

In one corner of the original PC’s motherboard, alongside the underpowered-but-epochal 8088 CPU, sat an empty socket. It awaited an upgrade it rarely received: an 8087 floating-point coprocessor.

Among the most complex chips of its day, the 8087 accelerated mathematical computations—in particular, the calculation of transcendental functions—by two orders of magnitude. Those functions weren’t something you’d need for a Lotus 1-2-3 spreadsheet, but for early users of AutoCAD they were absolutely essential. Pop that chip into your PC and rendering detailed computer-aided-design (CAD) drawings no longer felt excruciatingly slow. That speed boost didn’t come cheap, though. One vendor sold the upgrade for US $295—almost $800 in today’s dollars.
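To make that concrete, consider the kind of work the 8087 took off the CPU’s hands: rotating the points of a 2-D drawing calls sine and cosine for every vertex, and without the coprocessor the 8088 had to evaluate those functions in slow software routines. The snippet below is only an illustrative sketch (the rotate_point helper and its values are made up for this column, not AutoCAD’s actual code):

    #include <cmath>
    #include <cstdio>

    // Rotate a point of a 2-D drawing by `theta` radians about the origin.
    // The std::sin and std::cos calls are the transcendental evaluations the
    // 8087 performed in hardware; without it, the 8088 ground through them
    // in slow software floating-point routines.
    // (rotate_point is a made-up helper for illustration, not AutoCAD code.)
    void rotate_point(double &x, double &y, double theta) {
        const double s = std::sin(theta);
        const double c = std::cos(theta);
        const double rx = c * x - s * y;
        const double ry = s * x + c * y;
        x = rx;
        y = ry;
    }

    int main() {
        double x = 10.0, y = 0.0;
        rotate_point(x, y, 0.7853981633974483);  // rotate by 45 degrees
        std::printf("(%.3f, %.3f)\n", x, y);     // prints roughly (7.071, 7.071)
        return 0;
    }

A detailed drawing repeats that arithmetic for thousands of vertices every time it is redrawn, which is why a hundredfold speedup in the trigonometry was so noticeable.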

Recently, I purchased a PC whose CPU runs a million times as fast as that venerable 8088 and which holds a million times as much RAM. That computer cost me about as much as an original PC did, although in 2020 dollars that sum is worth only a third as much. Yet the proportion of my spending that went to a top-of-the-line graphics processing unit (GPU) was the same as what I would have invested in an 8087 back in the day.

Although I rarely use CAD, I do write math-intensive code for virtual or augmented reality and videogrammetry. My new coprocessor—a hefty slice of Nvidia silicon—performs its computations at 200 million times the speed of its ancestor.

That kind of performance bump can be attributed only partly to Moore’s Law. Half or more of the speedup derives from the massive parallelism designed into modern GPUs, which can simultaneously execute several thousand instances of a pixel-shader program (to compute position, color, and other attributes when rendering objects).
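A rough sketch makes the idea concrete. In the hypothetical CUDA kernel below (the shade, shade_pixels, and render_frame names and the 16-by-16 block size are my own illustrative choices, not any real driver’s shader code, and the image buffer is assumed to be already allocated on the GPU), every pixel of a frame gets its own thread, and the GPU keeps thousands of those threads in flight at once:

    #include <cuda_runtime.h>

    // One thread per pixel: each thread computes the color of a single pixel
    // independently of every other pixel, which is what lets the GPU run
    // thousands of them at the same time. shade() is a stand-in for whatever
    // lighting math a real shader would do (here it just draws a gradient).
    __device__ unsigned int shade(int x, int y, int width, int height) {
        unsigned int r = 255u * x / width;
        unsigned int g = 255u * y / height;
        return (r << 16) | (g << 8) | 0xFFu;  // pack as 0x00RRGGBB, blue fixed at 0xFF
    }

    __global__ void shade_pixels(unsigned int *image, int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height)
            image[y * width + x] = shade(x, y, width, height);
    }

    // Launching over a 1920x1080 frame creates roughly 2 million threads,
    // dispatched in 16x16 blocks; the GPU keeps thousands in flight at once.
    void render_frame(unsigned int *d_image, int width, int height) {
        dim3 block(16, 16);
        dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
        shade_pixels<<<grid, block>>>(d_image, width, height);
    }

Because no pixel depends on any other, the hardware is free to evaluate them all at once; that independence, rather than raw clock speed, is what the massive parallelism exploits.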

Such massive parallelism has its direct analogue in another, simultaneous revolution: the advent of pervasive connectivity. Since the late 1990s, it’s been a mistake to conceive of a PC as a stand-alone device. Through the Web, each PC has been plugged into a coprocessor of a different sort: the millions of other PCs that are similarly connected.

The computing hardware we quickly grew to depend on was eventually refined into the smartphone, which packs the essential parts of a PC into a modest size and power budget. And smartphones are even better networked than early PCs were. So we shouldn’t think of the coprocessor in a smartphone as its GPU, which helps draw pretty pictures on the screen. The real coprocessor is the connected capacity of some 4 billion other smartphone-carrying people, all capable of sharing with and learning from one another through the Web or on various social-media platforms. It’s something that brings out both the best and the worst in us.