Artificial Brains & Ultra-Efficient Computing

by Priyanka Patel

Brain-inspired computers are proving surprisingly adept at complex math, rivaling traditional supercomputers in efficiency, according to new research from Sandia National Laboratories. This suggests a future where scientific computing is far less constrained by its energy budget.

Beyond AI: Neuromorphic Computing’s Unexpected Strength

The human brain, operating on roughly 20 watts, effortlessly handles vast amounts of sensory data. Researchers are now unlocking similar potential in silicon.

  • Neuromorphic computers, modeled after the human brain, aren’t just for artificial intelligence—they excel at complex mathematical problems.
  • Sandia National Laboratories has been a key player in developing and testing these systems, utilizing chips from Intel, SpiNNaker, and IBM.
  • A new algorithm, NeuroFEM, allows neuromorphic computers to efficiently solve partial differential equations (PDEs), crucial for scientific computing.
  • Intel’s Loihi 2 systems demonstrate up to 2.5 times the efficiency of modern GPUs, with even greater gains reported by SpiNNaker2 systems.

For decades, scientists have strived to replicate the brain’s remarkable efficiency in computers, a field known as neuromorphic computing. Sandia National Laboratories has been at the forefront of this effort, deploying systems from Intel, SpiNNaker, and IBM over the past several years. But the latest findings reveal a capability beyond simply accelerating artificial intelligence and machine learning—these chips are surprisingly versatile.

What kind of computations does the brain perform so efficiently? Researchers James Aimone and Brad Theilman explained in a recent Sandia news release that even seemingly simple actions, like hitting a tennis ball, involve incredibly sophisticated calculations. “Pick any sort of motor control task — like hitting a tennis ball or swinging a bat at a baseball. These are very sophisticated computations. They are exascale-level problems that our brains are capable of doing very cheaply,” Aimone explained.

A paper recently published in the journal Nature Machine Intelligence details how Sandia researchers developed a novel algorithm for efficiently tackling a class of problems called partial differential equations (PDEs) on neuromorphic computers, including Intel’s Loihi 2 neurochips. PDEs are fundamental to modeling complex phenomena—from electrostatic forces to fluid dynamics and radio wave propagation—and typically demand the full power of modern supercomputers.

Did you know? Neuromorphic computing aims to mimic the brain’s structure and function, potentially leading to dramatically more energy-efficient computing.

While still in its early stages, neuromorphic computing is already showing impressive efficiency gains. Intel's Loihi 2 systems deployed at Sandia, known as Hala Point and Oheo Gulch, reportedly achieve 15 TOPS (trillions of operations per second) per watt, roughly 2.5 times the efficiency of GPUs such as Nvidia's Blackwell chips. The SpiNNaker2-based system, deployed last summer, claims an even larger 18x performance-per-watt improvement over modern GPUs.
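A quick back-of-envelope check ties these figures together. The sketch below uses only the numbers quoted in this article; the GPU baseline is implied by the 2.5x claim rather than independently measured.

```python
# Back-of-envelope comparison of the efficiency figures quoted in the article.
# The GPU baseline is derived from the reported 2.5x advantage, not measured.

loihi2_tops_per_watt = 15.0   # reported for Intel's Loihi 2 systems
loihi2_advantage = 2.5        # reported edge over modern GPUs

# Implied GPU baseline: 15 / 2.5 = 6 TOPS per watt
gpu_tops_per_watt = loihi2_tops_per_watt / loihi2_advantage

# SpiNNaker2's claimed 18x performance-per-watt gain over that same baseline
spinnaker2_tops_per_watt = gpu_tops_per_watt * 18

print(gpu_tops_per_watt)         # 6.0
print(spinnaker2_tops_per_watt)  # 108.0
```

On these figures, the SpiNNaker2 claim would put it at roughly 108 TOPS per watt, about seven times the Loihi 2 number, though the two systems were benchmarked under different conditions.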

However, programming these brain-inspired chips isn’t straightforward. The unique in-memory compute architecture often requires researchers to devise entirely new algorithms. The Sandia team addressed this challenge with NeuroFEM, an algorithm implementing the finite element method (FEM) for solving PDEs on spiking neuromorphic hardware. Importantly, this wasn’t purely theoretical; the researchers successfully solved PDEs using Intel’s Oheo Gulch system, which features 32 Loihi 2 neurochips.
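The finite element method at the heart of NeuroFEM can be illustrated with the simplest possible case. The sketch below solves a one-dimensional Poisson equation with plain NumPy on a CPU; it shows the classical FEM assemble-and-solve pattern, not the spiking formulation the Sandia team runs on Loihi 2.

```python
import numpy as np

# A minimal 1D finite element solve of the Poisson problem -u'' = 1 on [0, 1]
# with u(0) = u(1) = 0. Illustrative only: NeuroFEM maps this kind of problem
# onto spiking hardware, whereas this sketch runs conventionally on a CPU.

n = 8            # number of elements (uniform mesh)
h = 1.0 / n      # element size
m = n - 1        # interior nodes (the two boundary values are fixed at 0)

# Stiffness matrix for piecewise-linear "hat" elements: (1/h) * tridiag(-1, 2, -1)
A = (np.diag(2.0 * np.ones(m))
     - np.diag(np.ones(m - 1), 1)
     - np.diag(np.ones(m - 1), -1)) / h

# Load vector for the constant source f = 1: each hat function integrates to h
b = h * np.ones(m)

u = np.linalg.solve(A, b)  # nodal values of the approximate solution

# The exact solution is u(x) = x(1 - x)/2, so the midpoint value is 0.125;
# for this problem, linear FEM reproduces the exact values at the nodes.
print(u[m // 2])  # 0.125
```

Real applications replace this tiny tridiagonal system with sparse matrices holding millions of unknowns, which is where the energy cost of conventional hardware becomes the bottleneck.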

Testing revealed near-ideal “strong scaling,” meaning doubling the core count halved the solution time. While subject to Amdahl’s Law—which caps the speedup a workload’s serial fraction allows, no matter how many cores are added—NeuroFEM demonstrated 99 percent parallelizability. The paper’s authors also argue that the algorithm simplifies the programming process for neuromorphic systems. “An important benefit of this approach is that it enables direct use of neuromorphic hardware on a broad class of numerical applications with almost no additional work for the user,” they wrote. “The user friendliness of spiking neuromorphic hardware has long been recognized as a serious limitation to broader adoption and our results directly mitigate this problem.”
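Amdahl’s Law makes the 99 percent figure concrete. A minimal sketch of the resulting speedup curve, assuming the standard formula and the parallel fraction reported in the paper:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction
# of the workload that can run in parallel and n is the core count.

def amdahl_speedup(p, n):
    """Maximum speedup on n cores for a workload with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.99  # parallel fraction reported for NeuroFEM
for n in (1, 2, 4, 32, 1024):
    print(n, round(amdahl_speedup(p, n), 1))

# With p = 0.99, doubling cores nearly halves the runtime at small scale,
# but the 1% serial fraction caps the speedup at 1 / (1 - p) = 100x.
```

This is why "99 percent parallelizable" is a strong result: the serial remainder, not the hardware, eventually sets the ceiling.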

The researchers speculate that transitioning to analog-based neuromorphic systems—Loihi 2 is currently digital—could unlock even faster and more energy-efficient solutions for complex PDEs. However, they acknowledge that neuromorphics aren’t the only promising avenue. Machine learning and generative AI surrogate models are also gaining traction in accelerating conventional high-performance computing (HPC) problems.

“It remains an open question whether neuromorphic hardware can outperform GPUs on deep neural networks, which have largely evolved to benefit from GPUs’ single instruction, multiple data architecture,” the researchers wrote.
