Wanted: A New Microchip For The AI Era
The coming AI revolution faces a big hurdle: today’s microchips, whose conventional designs are reaching their limits.
It’s one thing to get a bunch of transistors on an integrated circuit to crunch numbers, even very large ones. But what the brain does is far more difficult. Processing vast amounts of visual data for use by a huge, multi-cellular organism is very different from the narrow calculations of conventional math.
The algorithms that will drive tomorrow’s autonomous cars, planes, and programs will be incredibly data-intensive, with needs well beyond what conventional chips were ever designed for. This is one reason for the hype surrounding quantum computing and neurosynaptic chips.
That challenge has a sister predicament: the end of Moore’s Law. The integrated circuit revolution that gave birth to the modern computer, smartphone, and basically all of Silicon Valley is in its twilight.
In the 1960s, Gordon Moore observed that the number of transistors per square inch on integrated circuits was doubling roughly every 18 months. That won’t be true after 2020, according to Robert Colwell, formerly of the Defense Advanced Research Projects Agency, or DARPA.
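Moore’s observation is simple exponential arithmetic. As a rough, hypothetical sketch (the starting density and the 18-month doubling period below are assumptions for illustration, not figures from Moore or DARPA), the compounding looks like this:

```python
# Back-of-the-envelope illustration of Moore's Law-style doubling.
# All starting figures here are illustrative assumptions, not sourced data.

def projected_transistor_density(initial: float, months: float,
                                 doubling_period_months: float = 18.0) -> float:
    """Project transistors per square inch after `months`, assuming the
    density doubles once every `doubling_period_months`."""
    return initial * 2 ** (months / doubling_period_months)

if __name__ == "__main__":
    # A hypothetical chip with 1,000 transistors per square inch would,
    # under an 18-month doubling, grow roughly 100-fold over a decade.
    print(round(projected_transistor_density(1_000, months=120)))  # ~101,594
```

That compounding, not any single breakthrough, is what made general-purpose chips keep pace with demand; losing it is what makes the post-2020 picture a problem.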
That poses a big problem for the Defense Department. In 2014, Defense One asked DARPA director Arati Prabhakar about it:
“There’s a $300 billion-a-year global semiconductor industry that cares deeply about the answer of what comes next,” Prabhakar said. Apple, for example, is said to be working on a processor devoted specifically to AI-related tasks. But DARPA has money in the game as well.
The magic of the integrated circuit, the reason why it will be so hard to replace, is that it was “a computational unit that you could use to do the broadest possible class of problems,” she said.
The way forward will be building chips for specific purposes. “If you’re willing to work on specialised classes of problems, you can actually get a lot more out of specialised architectures,” she said. “Special architectures will give us many more steps forward.”
Bottom line: there is no silver-bullet replacement for the integrated circuit on the horizon. But you could achieve something Moore’s Law-like by creating chips built to crunch lots of a specific type of data. Some of these already exist; they’re called application-specific integrated circuits, or ASICs.
Recently, DARPA announced several new next-generation chip design initiatives meant to build off that approach.
One, Software Defined Hardware, seeks “a hardware/software system that allows data-intensive algorithms to run at near ASIC efficiency without the cost, development time or single application limitations associated with ASIC development.”
A second program, Domain-Specific System on a Chip, takes a dual approach, letting architects “mix and match general purpose, special purpose (e.g., ASICs), and hardware accelerator coprocessors, as well as memory and [input/output] elements, into easily programmed [system on a chip] for applications within specific technology domains.”
In many ways, the premise of the program comes again from Moore’s 1965 paper. In this case, it’s his observation that, eventually, “the matching and tracking of similar components in integrated structures will allow the design of differential amplifiers of greatly improved performance.”
“With an eye toward the times we now live in, he laid out the technical directions to explore when the conditions under which scaling will be the primary means for advancement are no longer met,” DARPA program managers observed in the Broad Agency Announcement for “Page Three Investments,” an allusion to the actual page in Moore’s paper where the ideas first appear.
The two programs join several others in the Electronics Resurgence Initiative, a $216 million effort to create chip designs for 2030 to 2050.