
Super Fast Network Card For Many GPUs

GPU clusters are bogging down with network bottlenecks and stalled jobs. A new SuperNIC design could clear the congestion and make large AI clusters run smoothly.

Modern Artificial Intelligence (AI) workloads run on large clusters of Graphics Processing Units (GPUs), and building these clusters confronts AI teams with challenges such as networking sprawl. Servers use separate Network Interface Cards (NICs), Peripheral Component Interconnect Express (PCIe) switches, and rail switches, all connected in narrow, isolated paths that do not work well together. This setup limits bandwidth, creates cross-GPU congestion, and makes the system fragile: if one GPU link fails, it can halt the entire job. As clusters scale, device hops multiply, load distribution becomes uneven, incast events rise, and total cost of ownership increases. Expensive GPUs often sit under-used because the network cannot keep up.


Enfabrica’s ACF-S steps in to solve this bottleneck. It replaces multiple components with a single 3.2 Tbps Multi-GPU SuperNIC that gives GPUs access to 8× elastic bandwidth. Instead of routing data through several devices, ACF-S moves traffic directly and distributes it evenly across GPUs. This reduces data-movement latency, cuts device hops by up to 66%, and keeps network-to-GPU traffic congestion-free. For users running large training jobs, this means jobs stop failing due to link flaps and clusters stay productive even under heavy load.
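The hop-count claim can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the three-device legacy path (PCIe switch, NIC, rail switch) and the function name are assumptions based on the article's description, not Enfabrica's published topology.

```python
# Illustrative hop-count arithmetic; the device counts are assumptions
# inferred from the article, not a published cluster topology.

def hop_reduction(legacy_hops: int, new_hops: int) -> float:
    """Percent reduction in intermediate device hops between a GPU and the fabric."""
    return (legacy_hops - new_hops) / legacy_hops * 100

# Assumed legacy path: GPU -> PCIe switch -> NIC -> rail switch (3 devices)
# ACF-S path:          GPU -> ACF-S SuperNIC              (1 device)
reduction = hop_reduction(legacy_hops=3, new_hops=1)
print(f"{reduction:.0f}% fewer device hops")
```

Collapsing three intermediate devices into one yields roughly a two-thirds reduction, consistent with the "up to 66%" figure quoted above.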

The technology works for data centers scaling multi-GPU nodes as well as teams trying to control cost while growing their AI footprint. By collapsing NICs, PCIe switches, and other components into one architecture, ACF-S reduces CapEx by up to 29% and OpEx by as much as 55%. It also aligns with upcoming standards, supporting PCIe Gen5 and CXL 2.0+ today, with PCIe Gen6 and CXL 3.0 on the way.

ACF-S sits at the center of Enfabrica’s EMFASYS platform, which unifies compute, memory, and interconnect paths for AI workloads. For operators building dense GPU clusters and struggling with network limitations, the design offers a practical way to scale performance without multiplying complexity.

Nidhi Agarwal
Nidhi Agarwal is a Senior Technology Journalist at Electronics For You, specialising in embedded systems, development boards, and IoT cloud solutions. With a Master’s degree in Signal Processing, she combines strong technical knowledge with hands-on industry experience to deliver clear, insightful, and application-focused content. Nidhi began her career in engineering roles, working as a Product Engineer at Makerdemy, where she gained practical exposure to IoT systems, development platforms, and real-world implementation challenges. She has also worked as an IoT intern and robotics developer, building a solid foundation in hardware-software integration and emerging technologies. Before transitioning fully into technology journalism, she spent several years in academia as an Assistant Professor and Lecturer, teaching electronics and related subjects. This background reflects in her writing, which is structured, easy to understand, and highly educational for both students and professionals. At Electronics For You, Nidhi covers a wide range of topics including embedded development, cloud-connected devices, and next-generation electronics platforms. Her work focuses on simplifying complex technologies while maintaining technical accuracy, helping engineers, developers, and learners stay updated in a rapidly evolving ecosystem.
