CAMBRIDGE, Mass., January 29, 2026 — Imagine a computer that doesn’t just *dissipate* heat, but actually *uses* it to process information. Researchers at MIT have designed silicon structures, smaller than a speck of dust, that do just that—performing calculations with excess heat instead of electricity, potentially paving the way for more energy-efficient computing.
Turning Waste into Work
These tiny structures could revolutionize how we think about energy use in electronics, turning a byproduct into a powerful resource.
What is thermal analog computing? It’s a method where data is encoded as temperatures, and calculations are performed by controlling the flow and distribution of heat through a material. The result is then read as a power output.
The team successfully used these structures to perform matrix-vector multiplication—a fundamental mathematical operation powering machine-learning models like large language models—with over 99% accuracy. The findings were published in the journal Physical Review Applied.
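To see why diffusing heat can act as a matrix-vector multiplier, note that steady-state heat conduction is linear: the power read at each output port is a fixed weighted sum of the temperatures applied at the input ports. The sketch below is a generic finite-difference toy model in Python with NumPy, not the team's simulator or their porous silicon geometry; it builds that effective matrix by superposition and checks that letting heat diffuse and reading the output powers reproduces a matrix-vector product.

```python
# Toy illustration (not the paper's solver): the map from applied boundary
# temperatures to output heat fluxes in a steady-state conductor is linear,
# so the structure effectively performs a matrix-vector multiplication.
import numpy as np

n = 8                       # the modeled region is an n x n grid of interior cells
h = 1.0 / (n + 1)           # grid spacing
k = 1.0                     # thermal conductivity (arbitrary units)

def idx(i, j):
    """Map grid cell (row i, column j) to its unknown index."""
    return i * n + j

def solve_steady_state(left_temps):
    """Solve the discrete Laplace equation with the left edge held at the
    given input temperatures and the other edges at ambient (0)."""
    A = np.zeros((n * n, n * n))
    b = np.zeros(n * n)
    for i in range(n):
        for j in range(n):
            p = idx(i, j)
            A[p, p] = 4.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[p, idx(ii, jj)] = -1.0
                elif jj == -1:          # left edge: applied input temperatures
                    b[p] += left_temps[i]
                # top, bottom, and right edges sit at ambient and add nothing
    return np.linalg.solve(A, b).reshape(n, n)

def output_powers(T):
    """Heat flux through the right edge, proportional to the temperatures in
    the last column of cells (the right boundary sits at ambient)."""
    return k * T[:, -1] / h

# Build the effective matrix W one column at a time with unit inputs;
# superposition applies because heat diffusion is linear.
W = np.column_stack([output_powers(solve_steady_state(np.eye(n)[j]))
                     for j in range(n)])

# Any set of input temperatures is now "multiplied" by W simply by letting
# heat diffuse and reading the output powers.
x = np.linspace(0.0, 1.0, n)
direct = output_powers(solve_steady_state(x))
print(np.allclose(direct, W @ x))       # True: the structure computes W @ x
```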
“Most of the time, when you are performing computations in an electronic device, heat is the waste product. You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself and showing that computing with heat is possible,” said Caio Silva, an undergraduate student in the Department of Physics and lead author of the research.
Inverse Design: Flipping the Script
This breakthrough wasn’t about tinkering with existing materials; it was about *designing* materials to behave in a specific way. The researchers leveraged a software system they previously developed, employing a technique called inverse design. Instead of starting with a material and figuring out what it can do, they defined the desired function—in this case, matrix multiplication—and let the software design the optimal geometry.
The system creates complex silicon structures, each about the size of a dust particle, filled with tiny pores. It iteratively adjusts the structure’s design until it achieves the desired mathematical function, with heat diffusing through the silicon to perform the calculations.
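A highly simplified version of that inverse-design loop is sketched below. The forward model here is a placeholder, a fixed linear "physics" map with matrices `B` and `C`, a learning rate, and a target all invented for illustration, rather than the researchers' heat-transport simulator. Only the pattern is the same: simulate the structure's response, measure how far it is from the desired matrix, and nudge the design variables (here, pore densities between 0 and 1) to shrink the mismatch.

```python
# A toy inverse-design loop, assuming a placeholder forward model: the
# "structure" is a vector of pore densities rho in [0, 1], and its thermal
# response is modeled as W(rho) = B @ diag(rho) @ C with fixed B and C.
# The real software simulates heat flow through a porous silicon geometry;
# only the optimize-until-it-matches pattern is shared.
import numpy as np

rng = np.random.default_rng(0)
m = 12                                     # number of design variables ("pores")
B = rng.uniform(0.0, 1.0, size=(3, m))     # stand-in "physics" matrices
C = rng.uniform(0.0, 1.0, size=(m, 4))

def response(rho):
    """Thermal response of the structure for pore densities rho (placeholder)."""
    return B @ np.diag(rho) @ C

# Desired function: a target 3x4 matrix the structure should implement,
# built here from a hidden feasible design so the loop has something to reach.
W_target = response(rng.uniform(0.2, 0.8, size=m))

rho = np.full(m, 0.5)                      # initial design: half-open pores
lr = 0.02                                  # step size, hand-tuned for this toy
for step in range(2000):
    E = response(rho) - W_target           # mismatch with the desired matrix
    grad = 2.0 * np.diag(B.T @ E @ C.T)    # analytic gradient of ||E||_F^2
    rho = np.clip(rho - lr * grad, 0.0, 1.0)  # update, keep densities in [0, 1]

print(np.linalg.norm(response(rho) - W_target))  # mismatch driven toward zero
```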
“These structures are far too complicated for us to come up with just through our own intuition. We need to teach a computer to design them for us. That is what makes inverse design a very powerful technique,” explained Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies and a member of the MIT-IBM Watson AI Lab.
Overcoming Challenges and Future Applications
One hurdle the team faced was the natural flow of heat—from hot to cold—which limited the structures’ ability to encode negative values. They solved this by splitting the calculation into positive and negative components, using separate structures for each and subtracting the outputs.
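Put another way, a matrix with mixed signs is split as A = A₊ − A₋, where both halves contain only nonnegative entries and can therefore each be realized by its own heat-conducting structure; subtracting the two readouts recovers the full result. A minimal numerical check of that decomposition, using generic NumPy and an arbitrary example matrix:

```python
# Sketch of the sign-splitting trick, assuming each physical structure can
# only realize a matrix with nonnegative entries (heat flows hot -> cold).
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(-1.0, 1.0, size=(3, 3))   # target matrix with mixed signs
x = rng.uniform(0.0, 1.0, size=3)         # inputs encoded as temperatures >= 0

# Split the target into two nonnegative matrices: A = A_pos - A_neg.
A_pos = np.clip(A, 0.0, None)
A_neg = np.clip(-A, 0.0, None)

# Each half is realized by its own structure; the readouts are subtracted.
y = A_pos @ x - A_neg @ x
print(np.allclose(y, A @ x))              # True
```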
While scaling this technology for complex deep-learning models remains a significant challenge, the potential applications extend beyond pure computation. The structures could be used to detect heat sources and measure temperature changes in electronics without consuming additional energy, potentially eliminating the need for multiple temperature sensors.
“This information is critical. Temperature gradients can cause thermal expansion and damage a circuit or even cause an entire device to fail. If we have a localized heat source where we don’t want a heat source, it means we have a problem. We could directly detect such heat sources with these structures, and we can just plug them in without needing any digital components,” Romano added.
The researchers are now working on designing structures capable of sequential operations—where one structure’s output feeds into the next—and developing programmable structures that can encode different matrices without requiring a complete redesign.
