Friday, December 5, 2025

New Chip Helps AI Processors Move Data Faster

By channelling data through multiple lanes between memory and the processing unit, the chip lets data flow faster, preventing a data bottleneck.

Credo’s Weaver memory fanout gearbox

AI systems are processing ever larger and more complex models, but their performance is now limited by how quickly they can access memory and how much of it they can reach. Most AI chips can compute faster than data can move in and out of memory. Existing memory options such as LPDDR5X and GDDR face limits in speed and capacity, while High Bandwidth Memory (HBM) is expensive and difficult to scale. The result is a memory bottleneck that slows AI inference.


Credo Technology Group has introduced Weaver, a new type of memory fanout gearbox that aims to ease this bottleneck. It increases the amount of data that can move between an AI processor and its memory modules, improving both speed and efficiency. Weaver is the first product in Credo’s OmniConnect series, which focuses on scaling AI hardware for data centres.

A fanout gearbox acts as a high-speed bridge between AI processors and multiple memory channels. In simple terms, it takes a limited number of memory connections from the processor and “fans them out” into many more, allowing more data to move simultaneously. This is similar to expanding a single road into multiple lanes to reduce congestion. By doing so, it increases the total bandwidth available to the processor without changing the core memory type or architecture.
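To make the fan-out idea concrete, here is a minimal Python sketch that models a gearbox as a bridge between a few processor-side links and many memory channels, summing the raw bandwidth on each side. The link counts and per-channel rates are illustrative assumptions, not Weaver’s actual specifications.

```python
# Illustrative model of a memory fanout gearbox: a few high-speed
# processor-side links are "fanned out" into many memory channels,
# so more transfers can proceed in parallel. All numbers are hypothetical.

def aggregate_bandwidth_gbps(link_count: int, link_rate_gbps: float) -> float:
    """Total raw bandwidth of a group of equal-rate links, in Gb/s."""
    return link_count * link_rate_gbps

# Processor side: a small number of very fast serial links (assumed values).
host_links = 8
host_rate_gbps = 112.0          # per-link rate, Gb/s

# Memory side: many slower memory channels (assumed values).
memory_channels = 16
channel_rate_gbps = 64.0        # per-channel rate, Gb/s (illustrative)

host_bw = aggregate_bandwidth_gbps(host_links, host_rate_gbps)
mem_bw = aggregate_bandwidth_gbps(memory_channels, channel_rate_gbps)

# The usable throughput is limited by the slower side of the bridge.
usable_bw = min(host_bw, mem_bw)

print(f"Host-side bandwidth:   {host_bw:.0f} Gb/s")
print(f"Memory-side bandwidth: {mem_bw:.0f} Gb/s")
print(f"Usable bandwidth:      {usable_bw:.0f} Gb/s ({usable_bw / 8:.0f} GB/s)")
```

The design goal of a fanout device is to keep the two sides roughly balanced, so that neither the processor’s limited link budget nor the memory channels sit idle.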

Credo’s 112G VSR SerDes technology works as a high-speed “lane multiplier”: each lane moves data at 112 gigabits per second, and the fanout gearbox combines many such lanes to expand bandwidth, linking one processor to many memory devices. This is what allows Weaver to handle up to 16 TB/s of bandwidth using standard LPDDR5X memory, essentially breaking past the data-traffic limit that slows AI inference systems.
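As a rough back-of-the-envelope check of the lane arithmetic, the sketch below converts the quoted 112 Gb/s lane rate and 16 TB/s aggregate figure into GB/s and lane-equivalents. It is an illustration of the unit conversion only, not a description of Weaver’s internal lane count, and it ignores protocol overhead.

```python
# Back-of-the-envelope lane arithmetic for a "lane multiplier".
# Only the 112 Gb/s lane rate and the 16 TB/s aggregate figure come from
# the article; real systems spread this bandwidth across many devices
# and lose some capacity to protocol overhead.

LANE_RATE_GBPS = 112.0            # one SerDes lane, gigabits per second
BITS_PER_BYTE = 8

def lane_rate_gigabytes(lane_rate_gbps: float) -> float:
    """Convert a per-lane rate from Gb/s to GB/s."""
    return lane_rate_gbps / BITS_PER_BYTE

def lanes_for_target(target_tbps: float, lane_rate_gbps: float) -> int:
    """Raw lane-equivalents needed to carry target_tbps terabytes per second."""
    target_gbps = target_tbps * 1000 * BITS_PER_BYTE     # TB/s -> Gb/s
    return -(-int(target_gbps) // int(lane_rate_gbps))   # ceiling division

print(f"One 112G lane carries about {lane_rate_gigabytes(LANE_RATE_GBPS):.0f} GB/s")
print(f"16 TB/s of aggregate bandwidth is roughly "
      f"{lanes_for_target(16.0, LANE_RATE_GBPS)} lane-equivalents of raw capacity")
```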


The technology also supports flexible memory packaging, allowing system designers to adjust configurations based on the needs of different AI models. It includes built-in telemetry and diagnostic tools to monitor performance and ensure reliable operation over time.

Janarthana Krishna Venkatesan
As a tech journalist at EFY, Janarthana Krishna Venkatesan explores the science, strategy, and stories driving the electronics and semiconductor sectors.

