Brain-inspired computing could solve energy-efficient AI puzzle
There’s much to celebrate in using waste heat from data centers to warm corporate headquarters, university buildings, council offices, and thousands of homes. But the fact that there’s so much heat to go around points to a major issue with conventional computer processors – their power consumption. And while the current race to build even more powerful artificial intelligence (AI) systems has the potential to revolutionize the way that we work, the energy demands of these computationally intensive endeavors raise concerns. Fortunately, brain-inspired computing architectures could come to the rescue by shrinking AI’s carbon footprint.
To understand why, it’s worth recapping what we know about the human brain and delving into how it’s able to perform somewhere in the region of 1,000 trillion operations per second while consuming no more power than a dim lightbulb. Our brains contain billions of neurons, and each of those electrically excitable cells can connect to thousands of other neurons, forming trillions of synapses. Those junctions between individual cells can get stronger or weaker, weighting the calculations being performed.
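As a back-of-the-envelope illustration of those figures, here’s a rough Python sketch; the neuron count and per-neuron synapse count are common textbook estimates used here as assumptions, not numbers from the article:

```python
# Rough, illustrative numbers only. The neuron and synapse counts are
# common textbook estimates (assumptions, not figures from this article).
NEURONS = 86e9               # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4    # each neuron connects to thousands of others

print(f"Synapses: {NEURONS * SYNAPSES_PER_NEURON:.0e}")   # ~9e+14, i.e. "trillions"

OPS_PER_SECOND = 1e15        # ~1,000 trillion operations per second
POWER_WATTS = 20             # roughly a dim lightbulb

print(f"Ops per joule: {OPS_PER_SECOND / POWER_WATTS:.0e}")  # ~5e+13
```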
Biology beats silicon
The brain’s plasticity – a strengthening and weakening of connections in the network so that different inputs trigger different outputs – is represented crudely by the weights and biases in deep neural networks. But what biology achieves using 20 W (the estimated power consumption of a fully developed human brain) requires far more energy when those thought patterns are recreated on silicon chips.
For example, when IBM ran its “human scale” synapse simulation in 2012, it required a state-of-the-art supercomputer consuming megawatts of power. Another way of picturing the inefficiency of silicon is to consider that Go grandmaster Lee Sedol’s brain was 50,000 times as energy efficient as the racks of CPUs and GPUs that DeepMind’s AlphaGo needed to beat him in their five-game match in 2016.
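Taking the 50,000× figure at face value implies the AlphaGo hardware drew power on the order of a megawatt. A quick sanity check:

```python
BRAIN_POWER_W = 20          # estimated power budget of a human brain
EFFICIENCY_RATIO = 50_000   # claimed efficiency advantage over the AlphaGo racks

implied_power_w = BRAIN_POWER_W * EFFICIENCY_RATIO
print(f"Implied AlphaGo power draw: {implied_power_w / 1e6:.1f} MW")  # 1.0 MW
```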
This demand for power ramps up rapidly as more GPUs are deployed. Industry watchers have noted that while GPU performance has soared, so has power consumption. In 2012, units drew around 25 W of power, whereas today’s designs – with superior processing capabilities – consume several hundred watts. And advances in generative AI, which make heavy use of powerful GPUs, have sent energy demands along a much steeper path.
But, as mentioned, developers are exploring alternatives to conventional CPU and GPU designs. Several years ago, IBM and its partners revealed a neurosynaptic platform dubbed TrueNorth, with 1 million neurons, 256 million synapses, and 4,096 parallel and distributed neural cores. And the power consumption? Just 70 mW.
In-memory advantages
The system highlights the potential of brain-inspired computing designs to unlock much more energy-efficient AI development. Efficiency gains come from co-locating compute and memory. “You don’t have to establish communication between logic and memory; you just have to make appropriate connections between the different neurons,” explains Manuel Le Gallo – a member of IBM’s In-Memory Computing group. “In conventional computing, whenever you want to perform a computation, you must first access the memory, obtain your data, and transfer it to the logic unit, which returns the computation. And whenever you get a result, you have to send it back to the memory.”
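To make the contrast concrete, here’s a toy numpy sketch of the in-memory idea: the weights stay put in a resistive crossbar, and the matrix-vector multiply happens where they are stored. This is an illustrative analogy, not IBM’s actual hardware or API:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Weights live in the memory array itself, encoded as conductances G.
# Applying input voltages V produces output currents I = G @ V in place
# (Ohm's law at each cell, Kirchhoff's current law summing each output line),
# so the weight matrix never travels across a bus to a separate logic unit.
G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductances = stored weights
V = rng.uniform(0.0, 0.2, size=8)        # input voltages = activations

I = G @ V                                # analog multiply-accumulate
print(I)

# A conventional design would instead read G out of memory, move it to the
# ALU, multiply there, and write the result back, paying for every transfer.
```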
Artificial neurons within that brain-inspired computing logic can be thought of as accumulators, integrating multiple inputs and firing once a certain threshold is reached – one determined by the number and strength of the incoming signals. Another benefit of neuromorphic systems – computing designs that take inspiration from the structure of the brain – is their capacity to process noisy inputs, which has advantages in signal processing.
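Here’s a minimal, illustrative sketch of that accumulate-and-fire behavior – a leaky integrate-and-fire neuron in plain Python, not any particular neuromorphic toolkit:

```python
class IntegrateAndFireNeuron:
    """Toy leaky integrate-and-fire unit: it accumulates weighted inputs
    and fires once its membrane potential crosses a threshold."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold
        self.leak = leak        # fraction of potential retained each step
        self.potential = 0.0

    def step(self, inputs, weights):
        # Integrate: decay the stored potential, then add the weighted inputs.
        self.potential = self.leak * self.potential + sum(
            w * x for w, x in zip(weights, inputs)
        )
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1                # spike
        return 0                    # stay silent

neuron = IntegrateAndFireNeuron()
weights = [0.4, 0.3, 0.5]           # synaptic strengths
for t, inputs in enumerate([[1, 0, 0], [1, 1, 0], [0, 1, 1]]):
    print(t, neuron.step(inputs, weights))   # prints 0, then a spike, then 0
```

The leak term is what lets the unit gradually forget stale inputs, which hints at why spiking designs cope well with noisy signals.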
Reducing the energy budget required to train algorithms on large data sets – a necessary step in unlocking the full power of deep neural networks – could put new product opportunities on the roadmap. Brain-inspired computing chips with energy-efficient AI performance open the door to running AI on smaller, more portable devices, rather than relying solely on the large infrastructure available in the cloud.
The industry trend toward application-specific integrated circuits designed to meet the rising demands of AI can be seen in chips such as Intel’s Loihi 2, Google’s TPUs, and Apple’s A16, which the iPhone maker says features a 16-core neural engine. But it’s not just tech giants that are moving forward in this space. BrainChip is offering what it dubs ‘smart edge silicon’ that exploits neuromorphic computing to lower the energy cost of AI.
And then there are advanced materials beyond silicon to consider, which could take brain-inspired computing to a new level. Hybrain brings together a wide range of industry and research partners to explore structures capable of processing both light and electrical signals. Photonic structures read in information, which can then be fed into an electronic in-memory analog computing system.
Brain-inspired computing is gathering pace as the scramble to build ever bigger, more capable generative AI models makes the energy limits of conventional silicon more apparent. And the good news is that the energy savings put on the table by in-memory processing could reset commercial AI development along a more sustainable path.