The development of more energy-efficient and scalable computing technology has become a necessity. Traditional computers, however, face fundamental limits on energy efficiency, a restriction that has prompted the search for new computing technologies.
Probabilistic computers (p-computers), according to Kerem Camsari, Assistant Professor of Electrical and Computer Engineering at UC Santa Barbara, are the answer. P-computers are powered by probabilistic bits (p-bits), which interact with other p-bits in the same system. Unlike the bits in traditional computers, which are either 0 or 1, or qubits, which can be in multiple states at once, p-bits fluctuate between 0 and 1 and operate at room temperature.
“We showed that inherently probabilistic computers, built out of p-bits, can outperform state-of-the-art software that has been in development for decades,” said Camsari. His team worked with researchers from the University of Messina in Italy, as well as Luke Theogarajan, vice chair of UCSB’s ECE Department, and physics professor John Martinis. Using traditional hardware to build domain-specific designs, the researchers achieved their promising results with a sparse Ising machine (sIm), a novel computing device that solves optimization problems while minimizing energy consumption.
The sIm, according to Camsari, is a collection of probabilistic bits that can be thought of as people. Each person has only a small set of trusted friends, which constitute the machine’s “sparse” connections. “The people can make decisions quickly because they each have a small set of trusted friends and they do not have to hear from everyone in an entire network,” he explained. “The process by which these agents reach consensus is similar to that used to solve a hard optimization problem that satisfies many different constraints. Sparse Ising machines allow us to formulate and solve a wide variety of such optimization problems using the same hardware.”
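The update rule behind this analogy can be sketched in a few lines of Python. The sigmoid flip rule below is the standard p-bit equation; everything else — the `run_sim` function, its parameters, and the tiny two-bit example — is an illustrative simulation, not the team’s FPGA design:

```python
import math
import random

def run_sim(J, h, n, steps=10_000, beta=5.0, seed=0):
    """Simulate n p-bits (values -1/+1) with sparse couplings J and biases h.

    Each p-bit flips stochastically based only on its few listed
    neighbors (its "trusted friends"), never the whole network.
    """
    rng = random.Random(seed)
    m = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # Local field: a sum over the sparse neighbor list only.
        field = h[i] + sum(w * m[j] for j, w in J.get(i, []))
        # p-bit rule: probability of +1 is a sigmoid of the local field.
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * field))
        m[i] = 1 if rng.random() < p_up else -1
    return m

# Two p-bits coupled ferromagnetically (J = +1) should end up aligned
# with overwhelming probability at this beta (inverse temperature).
J = {0: [(1, 1.0)], 1: [(0, 1.0)]}
h = [0.0, 0.0]
state = run_sim(J, h, n=2)
print(state)
```

Because each update touches only a short neighbor list, the cost per flip stays constant as the network grows — the property the sparse design exploits.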
A field-programmable gate array (FPGA) was used in the team’s prototype design. The researchers demonstrated that their sparse architecture in FPGAs was up to six orders of magnitude faster, with sampling speeds five to eighteen times faster than those attained by optimized algorithms on traditional computers. Furthermore, they reported that their sIm achieves massive parallelism, with the number of flips per second — the key figure that determines how quickly a p-computer can reach an educated decision — scaling linearly with the number of p-bits.
Camsari returns to the analogy of a group of close friends debating a decision. “The key issue is that the process of reaching a consensus requires strong communication among people who continually talk with one another based on their latest thinking,” he noted. “If everyone makes decisions without listening, a consensus cannot be reached and the optimization problem is not solved.”
To put it another way, the faster the p-bits communicate, the faster they can establish a consensus, which is why increasing the flips per second while ensuring that everyone listens to each other is critical. “This is exactly what we achieved in our design,” he explained. “By ensuring that everyone listens to each other and limiting the number of ‘people’ who could be friends with each other, we parallelized the decision-making process.”
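One way to see how limiting the number of “friends” enables parallel decision-making: if the sparse connection graph is colored so that no two connected p-bits share a color, then all p-bits of one color can flip simultaneously while still listening to up-to-date neighbors. The greedy coloring and the ring example below are an illustrative sketch, not the method described in the paper:

```python
def greedy_coloring(adj, n):
    """Assign each node the smallest color not used by its neighbors."""
    color = [-1] * n
    for i in range(n):
        used = {color[j] for j in adj[i]}
        c = 0
        while c in used:
            c += 1
        color[i] = c
    return color

# A ring of four p-bits: each has only two neighbors, so two colors
# suffice and half the machine can flip in every parallel step,
# no matter how large the ring grows.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
colors = greedy_coloring(adj, 4)
print(max(colors) + 1)  # number of parallel update groups
```

In a densely connected machine every p-bit would need its own color, serializing the updates; sparsity is what keeps the number of groups small.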
“To us, these results were the tip of the iceberg,” he said. “We used existing transistor technology to emulate our probabilistic architectures, but if nanodevices with much higher levels of integration are used to build p-computers, the advantages would be enormous. This is what is making me lose sleep.”