These chips combine photonics and electronics to process information at ultra-high speed, low latency and low power consumption.
In nature, light helps illuminate dark places.
In electronics, light helps transfer data from one place to another at lightning speeds via optical fibre cables.
When employed in AI chips, light can be a powerful medium for computing highly complex algorithms, making it well suited to the increasingly demanding workloads of modern AI.
This is what Lightelligence, a Boston-based company founded by an MIT alumnus, aims to provide. By combining photonics and electronics, the company builds optical chips that process information with light at ultra-high speeds, with low latency and low power consumption, for high-performance computing tasks.
The newest development, called the Photonic Arithmetic Computing Engine (PACE), is claimed to solve some of the hardest mathematical problems up to 100x faster than a conventional computing processor.
At the heart of the system is an application-specific integrated circuit (ASIC) that comes with control logic for regulating data flow and I/O as well as SRAM for data storage. It is this ASIC that performs the intensive matrix multiplications to solve many AI computing problems.
Compact in size, PACE incorporates several photonic circuits that transmit light signals. Because there are no transistors, there is no heat generated by transistor switching. Rather than electricity, the optical chip uses light to perform rapid AI computations.
“We precisely control how the photons interact with each other inside the chip,” says Yichen Shen, founder and CEO of Lightelligence. “It’s just light propagating through the chip, photons interfering with each other. The nature of the interference does the mathematics that we want it to do.”
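Lightelligence has not published its circuit design, but the standard building block of programmable photonic processors of this kind is the Mach-Zehnder interferometer (MZI): two beam splitters around a tunable phase shifter, where interference routes light between two waveguides. That routing is a tunable 2x2 linear operation, and meshes of MZIs compose such operations into full matrix-vector multiplications. The following is a minimal sketch of that principle, not of PACE itself:

```python
import numpy as np

# Illustrative sketch, assuming an idealised lossless MZI: interference
# between two coherent light paths implements a tunable 2x2 linear map.
BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)  # 50:50 beam splitter

def mzi(phi):
    """2x2 transfer matrix of an MZI with internal phase shift phi."""
    phase = np.diag([np.exp(1j * phi), 1.0])  # phase shifter on one arm
    return BS @ phase @ BS

def output_intensity(phi, amplitudes):
    """Intensities measured at the two output ports."""
    return np.abs(mzi(phi) @ amplitudes) ** 2

light_in = np.array([1.0, 0.0])                # all light enters the top port
print(output_intensity(0.0, light_in))         # phi=0: light crosses -> [0, 1]
print(output_intensity(np.pi, light_in))       # phi=pi: light stays  -> [1, 0]
print(output_intensity(np.pi / 2, light_in))   # 50:50 split -> [0.5, 0.5]
```

Tuning the phase continuously varies the split ratio, which is how "the nature of the interference does the mathematics": each interferometer applies one matrix element's worth of weighting as light passes through.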
Building on Comet, the fully integrated optoelectronic AI accelerator the company released in 2019 with nearly 100 photonic devices at a clock speed of 100 kilohertz, the newly revealed PACE integrates more than 12,000 photonic devices at a clock speed of 1 gigahertz.
By leveraging light interference, the optical chip produces less heat and is resilient to changes in its surroundings, so it consumes little power and operates on limited electrical energy.
Solves Mathematical Problems With Ease
The Ising problem and the Max-Flow Min-Cut problem are among the challenging mathematical problems that scientists struggle to solve. PACE searches for their solutions efficiently and delivers results quickly, demonstrating its ability to tackle advanced workloads.
“These problems belong to an important class of intractable mathematical problems known as NP-complete, which have stumped mathematicians for the last 50 years,” said Yichen Shen. “Algorithms for NP-complete problems are important because they can be mapped to each other, and they have hundreds of practical applications in fields that include cryptography, power grid optimisation and advanced image analysis.”
PACE uses matrix-vector multiplication to generate high-quality solutions at low latency. With this approach, PACE runs Ising-problem algorithms 100x faster than a typical computing unit such as the NVIDIA RTX 3080, and 25x faster than alternative approaches such as the Simulated Bifurcation Machine.
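To see why the Ising problem maps naturally onto a matrix-multiplication engine: the Ising energy of a spin configuration s (entries ±1) with coupling matrix J is H(s) = -1/2 s^T J s, so evaluating candidate configurations is dominated by the product J @ s. The sketch below uses a naive random search as the solver purely for illustration; it is an assumption on our part, not PACE's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_energy(J, s):
    """Ising energy H(s) = -1/2 s^T J s; the J @ s matrix-vector
    multiply is the operation a photonic engine accelerates."""
    return -0.5 * s @ (J @ s)

def random_search(J, n_iters=2000):
    """Toy baseline solver: keep the best random configuration seen."""
    n = J.shape[0]
    best_s, best_e = None, np.inf
    for _ in range(n_iters):
        s = rng.choice([-1.0, 1.0], size=n)
        e = ising_energy(J, s)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

J = rng.normal(size=(16, 16))
J = (J + J.T) / 2            # couplings are symmetric
np.fill_diagonal(J, 0.0)     # no self-coupling
s, e = random_search(J)
print(f"best energy found: {e:.3f}")
```

Each candidate evaluation costs one matrix-vector multiply, which is why a chip that performs that multiply in nanoseconds can sweep the search space far faster than an electronic processor.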
According to Shen, where a GPU requires hundreds of clock cycles to complete a 64 x 64 matrix multiplication, PACE can do it in 5-10 nanoseconds.
Challenges That Remain
Obtaining a high-quality result requires extensive circuit design, simulation, iteration and test chips.
Large-scale chip fabrication is another challenge.
“We are taking two chips built on different fabrication processes and stacking them up directly with thousands of connections between them,” said Maurice Steinman, vice president of engineering at Lightelligence.
“One is powered by light, so we need to get a light source in there. The other needs an electric current to power it and heat removal. There are tremendous challenges we have to systematically attack to get all of that to come together,” he added.
Much of the intensive AI compute happens in the cloud, at data centres that require large processing capabilities. With the new optical chips, the many servers employed will consume much less electricity and dissipate far less heat.
With this new technology, the advancing computational challenges of the future can be quickly and efficiently tackled.
Autonomous vehicles are another application that will rely heavily on AI for making quick decisions. Faster computational imaging leads to faster decision-making.
“Our chip completes these decision-making tasks at a fraction of the time of regular chips, which would enable the AI system within the car to make much quicker decisions and more precise decisions, enabling safer driving,” says Yichen Shen.
“We believe optics is going to be the next computing platform, at least for linear operations like AI,” adds Shen.
The PACE accelerator chip will begin shipping in 2022.