Nvidia may have done it first, but that does not mean India will be left behind. That is why Bengaluru-based startup SandLogic has immersed itself in developing an AI-based co-processor IP, ExSLerate.
The name SandLogic combines the two basic building blocks of a semiconductor processor chip: silicon, which comes from sand, and logic, the intellect it executes. The popularisation of artificial intelligence has transformed computing, creating an insatiable hunger for high-performance compute that comes at the cost of steep power consumption and heavy infrastructure requirements.

Traditional AI processing relies on high-end GPUs that consume large amounts of energy, making them unsuitable for edge applications. SandLogic therefore set out to deploy AI models directly on compact devices, reducing the reliance on cloud computing and energy-hungry processors.
Founded in 2018 by Kamalakar Devaki, Jesudas Fernandes, Radhika Kanigiri, and Ravi Kumar Rayana, Bengaluru-based SandLogic developed ExSLerate, a low-power AI co-processor IP designed to bring efficient AI computation to consumer devices, medical devices, drones, IoT, and other edge applications.
Unlike traditional accelerators, which operate as external co-processors that must continuously exchange data with the host processor, ExSLerate is claimed to cut memory transactions by 90%. “This results in significantly lower power consumption while boosting efficiency. We have filed a patent for this innovation,” says Kamal.
“This is particularly useful in edge computing applications, where power efficiency and real-time AI decision-making are critical. Our proprietary software stack, EdgeMatrix, which sits on existing processors and upcoming chips packing ExSLerate IP, improves the co-processor’s performance by optimising token generation for AI models,” he elaborates.
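SandLogic has not published the details of its patented scheme, but a back-of-the-envelope calculation shows why keeping intermediate data in on-chip memory cuts off-chip transactions so sharply. The sketch below, in plain Python with assumed matrix and tile sizes, compares the estimated memory traffic of a naive matrix multiply against a tiled schedule that reuses blocks held in local memory; it illustrates the general principle only, not ExSLerate's actual design.

```python
# Generic illustration (not SandLogic's patented scheme) of why keeping
# intermediate data in on-chip memory slashes off-chip memory transactions.
# We estimate element transfers for a matrix multiply C = A @ B under two
# schedules: a naive one that re-reads operands for every output element,
# and a tiled one that reuses a TxT block held in local (on-chip) memory.

M = N = K = 1024      # matrix dimensions (assumed for illustration)
T = 64                # tile edge assumed to fit in on-chip SRAM

# Naive schedule: each of the M*N outputs streams a full row of A and a
# full column of B from off-chip memory, plus one write per output.
naive_traffic = M * N * 2 * K + M * N

# Tiled schedule: each operand block is fetched once per tile of the other
# operand, giving the standard 2*M*N*K/T read traffic plus the output writes.
tiled_traffic = M * K * (N // T) + K * N * (M // T) + M * N

print(f"naive : {naive_traffic:,} element transfers")
print(f"tiled : {tiled_traffic:,} element transfers")
print(f"reduction: {100 * (1 - tiled_traffic / naive_traffic):.1f}%")
```

With these assumed sizes the tiled schedule moves roughly 98% less data off-chip, which is the kind of saving that translates directly into lower power on an edge device.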
ExSLerate can be clustered for larger AI workloads without requiring extensive redesign. The co-processor is compatible with multiple neural network architectures, including convolutional neural networks (CNNs) and transformers. The startup has integrated its edge AI platform, EdgeMatrix.io, into the design to optimise AI workflows on the chip itself. It packs all the required software tools to push AI workloads to the chip, including compilers, parsers, and various run-time optimisers.
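The startup has not disclosed EdgeMatrix's developer interface, but compiler stacks of this kind typically ingest models in a standard interchange format such as ONNX before quantising and mapping them onto the accelerator. The snippet below is a generic sketch of that first step, assuming PyTorch and torchvision; the model choice and export settings are illustrative and are not SandLogic's documented workflow.

```python
# Minimal sketch, assuming PyTorch and torchvision are installed: export a
# trained CNN to ONNX, the kind of standard interchange format an edge-AI
# compiler stack (parser -> compiler -> run-time optimiser) typically
# consumes. This is a generic workflow, not SandLogic's EdgeMatrix API.
import torch
import torchvision

# Load a pretrained CNN and switch it to inference mode.
model = torchvision.models.mobilenet_v2(weights="IMAGENET1K_V1").eval()

# One dummy RGB frame in NCHW layout, used to trace the graph.
dummy_input = torch.randn(1, 3, 224, 224)

# Freeze the graph into an .onnx file that a downstream parser can read,
# quantise, and schedule onto the co-processor's compute units.
torch.onnx.export(model, dummy_input, "mobilenet_v2.onnx", opset_version=13)
```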