The Big Chip, developed by researchers at the Chinese Academy of Sciences, is a 256-core RISC-V processor system that explores new possibilities in scaling processor performance.
RISC-V processors have emerged as a pivotal innovation in the rapidly evolving computing landscape. These processors are built on the reduced instruction set computing (RISC) philosophy, renowned for its efficiency and versatility. The design of RISC-V processors is streamlined, focusing on a smaller set of simpler instructions that can be executed quickly, which makes them effective across a wide variety of computing environments.
The open-source nature of RISC-V is one of its most significant advantages, offering unprecedented levels of customization and adaptability. This aspect is crucial in many applications, from embedded systems to high-performance computing. Unlike proprietary processor architectures, RISC-V allows developers to tailor the technology to specific needs, facilitating innovation and efficiency. As the complexity of computing challenges continues to increase, the flexibility and efficiency of RISC-V processors make them instrumental in driving technological advancements across various industries.
Researchers at the Chinese Academy of Sciences have published a report on the “Big Chip,” a chiplet-based architecture, to investigate the challenges and possibilities of scaling up processor performance. The team has developed a RISC-V processor system named Zhejiang Big Chip, consisting of 16 chiplets with 256 cores in total, fabricated in a 22nm CMOS process. Each chiplet contains 16 RISC-V cores interconnected through a network-on-chip (NoC), enabling symmetric communication between chiplets. The architecture of the Big Chip is designed to scale up to 100 chiplets, as reported in the Elsevier journal Fundamental Research.
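To make the chiplet-to-chiplet communication pattern concrete, here is a minimal sketch of dimension-ordered (XY) routing on a 4x4 grid of 16 chiplets. The grid layout and routing policy are illustrative assumptions, a common NoC arrangement rather than the topology described in the paper.

```python
# Hypothetical sketch: 16 chiplets in a 4x4 grid with XY (dimension-ordered)
# routing, a common network-on-chip scheme. The grid size and routing policy
# are assumptions for illustration, not details from the paper.

GRID = 4  # 4x4 = 16 chiplets

def chiplet_coords(cid: int) -> tuple[int, int]:
    """Map a chiplet ID (0-15) to (x, y) grid coordinates."""
    return cid % GRID, cid // GRID

def xy_route(src: int, dst: int) -> list[int]:
    """Return the chiplet IDs visited travelling X-first, then Y."""
    (sx, sy), (dx, dy) = chiplet_coords(src), chiplet_coords(dst)
    path = [src]
    x, y = sx, sy
    while x != dx:                 # move along X first
        x += 1 if dx > x else -1
        path.append(y * GRID + x)
    while y != dy:                 # then along Y
        y += 1 if dy > y else -1
        path.append(y * GRID + x)
    return path

# Opposite corners of the grid need 3 X-hops plus 3 Y-hops:
print(len(xy_route(0, 15)) - 1)  # -> 6
```

Deterministic XY routing like this keeps every chiplet's view of the network symmetric: any pair of chiplets communicates over a path determined purely by their grid positions.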
The system uses a die-to-die interface with a time-multiplexing technique to connect the chiplets, supporting a unified memory system in which any core on any chiplet can access memory system-wide. The time-multiplexing approach reduces the area needed for I/O bumps and simplifies interposer wiring. The team compared their work with Cerebras’ wafer-scale engines WSE-1 and WSE-2 and with chiplet-based processors from AMD and Nvidia, concluding that future developments should focus on near-memory computing and optical-electronic chiplet communication as critical research areas.
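The I/O-bump saving from time multiplexing can be sketched as follows: several logical channels share one physical die-to-die wire in fixed round-robin time slots, so the interface needs one bump where a parallel design would need one per channel. The channel count and framing here are illustrative assumptions, not the paper's actual interface design.

```python
# Hypothetical sketch of time-division multiplexing on a die-to-die link:
# N logical channels share one physical wire in round-robin time slots,
# so the interface needs 1 I/O bump instead of N. The channel count and
# framing are illustrative, not the paper's actual interface.

from itertools import zip_longest

def tdm_serialize(channels: list[list[int]]) -> list[int]:
    """Interleave per-channel symbol streams into one slot stream."""
    frames = zip_longest(*channels, fillvalue=0)  # pad idle slots with 0
    return [symbol for frame in frames for symbol in frame]

def tdm_deserialize(stream: list[int], n_channels: int) -> list[list[int]]:
    """Recover each logical channel from its fixed slot position."""
    return [stream[i::n_channels] for i in range(n_channels)]

# Four logical channels share a single physical link:
chans = [[1, 2], [3, 4], [5, 6], [7, 8]]
wire = tdm_serialize(chans)
print(wire)  # -> [1, 3, 5, 7, 2, 4, 6, 8]
assert tdm_deserialize(wire, 4) == chans
```

The trade-off is bandwidth per channel: each logical channel gets 1/N of the wire's slots, which is why the physical link must run fast enough to cover the aggregated traffic.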