Tuesday, April 23, 2024

Revolutionising AI With First-Of-Its-Kind Server-On-Chip


A 7nm AI chip marks a new era in AI server-on-a-chip design, promising a timely, optimised, and cost-efficient transformation of the AI infrastructure landscape.


NeuReality’s AI-focused NR1 chip, built on a 7nm process, has moved its finalised design to Taiwan Semiconductor Manufacturing Company (TSMC) for manufacturing, marking the debut of what the company calls the world’s first Artificial Intelligence (AI)-focused server-on-a-chip (SoC). With the widespread adoption of AI as a service (AIaaS) and growing demand from resource-intensive applications like ChatGPT, NeuReality’s offering arrives as the industry searches for economical paths to modern AI inference infrastructure. In tests on AI-driven server configurations, the NR1 chip delivered ten times the performance of its counterparts at an equivalent cost, positioning NeuReality’s design at the forefront of cost-efficient, optimised AI inference solutions.

The NR1 chip is the world’s first Network Addressable Processing Unit (NAPU), a modern alternative to the older CPU-centric approach to AI inference. Multiple NR1 chips can work collaboratively to bypass system bottlenecks. Each NR1 chip integrates a diverse range of compute capabilities, including a Peripheral Component Interconnect Express (PCIe) interface suitable for any Deep Learning Accelerator (DLA), an inbuilt Network Interface Controller (NIC), and an AI hypervisor: a hardware-driven sequencer that orchestrates the compute engines and moves data structures between them.


“For Inference-specific deep learning accelerators (DLA) to perform at full capacity, free of existing system bottlenecks and high overheads, our solution stack, coupled with any DLA technology, enables AI service requests to be processed faster and more efficiently. Function for function, hardware runs faster, and parallelism is much more than software. As an industry, we’ve proven this model, offloading the deep learning processing function from CPUs to DLAs such as GPU or ASIC solutions. As in Amdahl’s law, it is time to shift the acceleration focus to the system’s other functions to optimise the whole AI inference processing. NR1 offers an unprecedented competitive alternative to today’s general-purpose server solutions, setting a new standard for the direction our industry must take to support the AI Digital Age fully,” said Moshe Tanach, Co-Founder and CEO of NeuReality.
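The Amdahl’s law argument above can be illustrated numerically. The sketch below uses hypothetical fractions (not NeuReality figures) to show why, once the deep-learning compute itself has been offloaded to a DLA, the remaining system functions dominate overall inference time and become the next target for acceleration:

```python
def amdahl_speedup(accelerated_fraction, speedup_factor):
    """Overall speedup when only a fraction of the workload is accelerated
    (Amdahl's law): 1 / ((1 - f) + f / s)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / speedup_factor)

# Hypothetical example: deep-learning compute is 80% of inference time
# and a DLA accelerates it 100x. The untouched 20% (networking,
# scheduling, pre/post-processing on the CPU) caps the overall gain:
print(round(amdahl_speedup(0.80, 100), 2))  # well below the 100x the DLA alone provides
```

Even with a very fast accelerator, the overall speedup here stays under 5x, which is the motivation for moving the remaining system functions into hardware as the NAPU does.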


Nidhi Agarwal
Nidhi Agarwal is a journalist at EFY. She is an Electronics and Communication Engineer with over five years of academic experience. Her expertise lies in working with development boards and IoT cloud. She enjoys writing, as it enables her to share her knowledge and insights on electronics with like-minded techies.

