As sensor data overwhelms the cloud, Innatera’s neuromorphic chips bring always-on, ultra-low-power AI directly to the edge. But how? Sumeet Kumar from Innatera explains everything about their chip ‘Pulsar’ to EFY’s Ashwini Kumar Sinha, Nidhi Agarwal, and Saba Aafreen.

Q. Is Innatera a startup?
A. Yes, Innatera is a startup with a team of around 110 people spread across 15 countries. The team brings experience from leading semiconductor companies and successful startups, including contributors to widely used chip technologies. Our board includes Prof. Alberto Sangiovanni-Vincentelli, co-founder of Cadence. We have been highly capital-efficient, reaching production with €25 million raised across our Seed and Series A rounds. We are currently closing a €30 million Series B round, which continues to see strong investor interest, and we plan to move towards a Series C with established investors in Europe and other regions.
Our technology has global relevance. While core development is based in Europe, we collaborate with teams worldwide on applications and automation. This includes work with the Indian Institute of Technology (IIT) Delhi and a Pune-based company that developed a smart smoke detector using our chip. We do not sell solutions; if the application works, our chip works. We also actively engage with the ecosystem through conferences, workshops, and major very large scale integration (VLSI) and neuromorphic computing forums.
Q. What does the company do?
A. We founded Innatera in 2018 as a spin-off from Delft University of Technology with a clear goal of bringing brain-like intelligence directly to sensors. Sensors today generate massive amounts of data, far more than can realistically be sent to the cloud. Our solution is to process the world’s sensor data directly at the source: by removing the constant need for cloud connectivity, our neuromorphic chips deliver ultra-low-power AI processing that is fast and always on.
Q. Do you also have a presence in India?
A. We have long had a strong presence in India. In recent years, we have focused on expanding our Indian customer base and partnering with local solution developers and research institutes working on next-generation neuromorphic technologies. We also showcase innovations from our Indian partners to our global customers. India is strategically important to us, and we will continue to invest in building a strong ecosystem for our technology in the country.
Q. How are neuromorphic principles implemented at the hardware level?
A. In our ‘Pulsar’ chip, neuromorphic principles are implemented by mimicking the brain’s structure in hardware and software. The chip uses processing elements that function as silicon neurons and synapses, recreating integrate-and-fire spiking behaviour with fine temporal resolution. This enables the full-fidelity implementation of spiking neural networks (SNNs) on silicon while remaining robust, energy-efficient, and manufacturable.
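To make the integrate-and-fire behaviour concrete, here is a minimal leaky integrate-and-fire (LIF) neuron sketched in plain Python/NumPy. The time constant, threshold, and reset values are illustrative placeholders, not Pulsar’s actual silicon parameters.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by a current trace.
    All parameters are illustrative, not Pulsar's silicon values."""
    v = v_reset
    v_trace, spike_times = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays towards rest while
        # being driven up by the input current.
        v += (dt / tau) * (-(v - v_reset) + i_in)
        if v >= v_thresh:               # threshold crossed: emit a spike
            spike_times.append(t * dt)
            v = v_reset                 # reset the membrane after firing
        v_trace.append(v)
    return np.array(v_trace), spike_times

# A constant supra-threshold input produces a regular spike train.
_, spikes = lif_neuron(np.full(200, 2.0))
print(f"{len(spikes)} spikes in 200 ms")
```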
Q. What are the main computational elements in the neuromorphic microcontrollers?
A. Pulsar integrates three compute fabrics. First, a spiking neural network accelerator built from silicon neurons and synapses operating in parallel at extremely low energy. Second, a conventional convolutional neural network (CNN) accelerator for running traditional deep-learning models. Third, a reduced instruction set computer version V (RISC-V) central processing unit (CPU) subsystem with standard sensor interfaces for data acquisition, control logic and actuation. Together, these form a complete system-on-chip optimised for sensing applications.
Q. What is SNN processing, and why is it important in neuromorphic systems?
A. In conventional neural networks, data is processed as continuous digital values, and every node in the network consumes energy regardless of whether the computation is meaningful. Spiking neural networks are fundamentally different. SNNs are event-driven. Sensor data is encoded as simple voltage spikes, essentially single-bit events, where information is carried in the timing or frequency of the spikes. Computation occurs only when something meaningful happens. These networks inherently understand time, enabling them to be much smaller and far more energy-efficient than traditional neural networks. This temporal, event-driven nature closely mirrors how the biological brain processes information, making SNNs particularly powerful for edge- and sensor-based applications.
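The event-driven property is easy to illustrate in code. The sketch below uses send-on-delta encoding, one common way of turning a continuous signal into spike events; it illustrates the principle and is not Innatera’s actual encoding scheme.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Send-on-delta encoding: emit an event only when the signal has
    moved more than `threshold` since the last event. A static input
    produces no events at all, so no downstream computation occurs.
    (Illustrative only; not Innatera's actual encoder.)"""
    events, last = [], signal[0]
    for t, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= threshold:
            events.append((t, 1 if x > last else -1))  # (time step, polarity)
            last = x
    return events

t = np.linspace(0, 1, 1000)
print(len(delta_encode(np.sin(2 * np.pi * 3 * t))))  # changing signal: many events
print(len(delta_encode(np.zeros(1000))))             # static signal: 0 events
```

A changing signal generates events in proportion to how fast it changes, while a quiet sensor generates none, which is exactly why an event-driven network spends no energy on uninteresting input.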
Q. How do you train the SNNs?
A. Training SNNs is similar to training conventional neural networks. We developed a framework called ‘Talamo’, which uses PyTorch as the front end. Developers train models using standard workflows, and our compiler automatically maps them to hardware. Developers do not need to understand the chip’s internal architecture.
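Talamo’s internal application programming interface is not shown here, but the workflow described, training in PyTorch and compiling down to hardware, typically looks like the surrogate-gradient sketch below. The `SpikeFn` and `LIFLayer` classes are our own illustrative stand-ins, not Talamo code.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate gradient in
    the backward pass, so ordinary backprop works on spiking layers.
    (A common SNN training technique; Talamo's internals may differ.)"""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()           # fire where potential >= threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Derivative of a fast sigmoid centred on the threshold.
        return grad_out / (1.0 + 10.0 * (v - 1.0).abs()) ** 2

class LIFLayer(nn.Module):
    """Linear weights feeding leaky integrate-and-fire neurons."""
    def __init__(self, n_in, n_out, decay=0.9):
        super().__init__()
        self.fc, self.decay = nn.Linear(n_in, n_out), decay

    def forward(self, x_seq):               # x_seq: (time, batch, features)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features)
        spikes = []
        for x in x_seq:
            v = self.decay * v + self.fc(x) # leaky integration
            s = SpikeFn.apply(v)
            v = v * (1.0 - s)               # reset neurons that fired
            spikes.append(s)
        return torch.stack(spikes)

# An ordinary PyTorch training step: spike counts act as class logits.
model = LIFLayer(16, 4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(50, 8, 16)                   # 50 timesteps, batch of 8
y = torch.randint(0, 4, (8,))
loss = nn.functional.cross_entropy(model(x).sum(0), y)
loss.backward()
opt.step()
```

From the developer’s point of view this is just PyTorch, which is the point of the Talamo front end: the compiler, not the developer, worries about mapping the trained model onto the chip.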
Q. Achieving accuracy in SNNs is difficult. How do you address this?
A. SNN training has come a long way since the early days, and accuracy is in no way a limitation. Using established training techniques and high-quality datasets, our SNN models achieve accuracy comparable to state-of-the-art CNNs. In applications such as audio classification and keyword spotting, our models meet or exceed existing industry benchmarks.
Q. Why do developers still prefer CNNs for image and video processing?
A. Traditional image and video data are frame-based and do not inherently contain temporal information. Spiking neural networks are particularly effective for event-driven and time series data. While event-based vision sensors can benefit from SNNs, conventional imaging pipelines are generally better suited to convolutional neural networks. That said, even in imaging, SNNs offer significant advantages over traditional computer vision, particularly for event-based imaging.
Q. So your chip has both CNN and SNN. Why keep both on the same chip?
A. Real-world applications are rarely solved using a single type of neural network. Different stages of a sensing pipeline may require different processing techniques, and application developers may want to run different models at different times. For instance, in a video doorbell, an SNN can detect a human in the radar data and then trigger the camera to capture an image, which a CNN processes to determine whether the person left a package. By integrating both accelerators on the same chip, we give developers the flexibility to choose the best approach for each stage of their application without compromising power efficiency or performance.
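The doorbell example maps to a simple control flow. Every function in this sketch is a hypothetical stand-in, not an Innatera software development kit call: in a real design, `snn_presence` would be a model mapped to the spiking fabric and `cnn_classify` a model mapped to the CNN fabric.

```python
import random

def snn_presence(radar_window):          # always-on, ultra-low-power stage
    return max(radar_window) > 0.8       # stub: crude "motion energy" check

def capture_frame():                     # camera is woken only on demand
    return "frame"

def cnn_classify(frame):                 # heavier stage, runs rarely
    return random.choice(["person", "package"])

def doorbell_loop(radar_stream):
    for radar_window in radar_stream:
        if snn_presence(radar_window):   # SNN flags a human in radar data
            frame = capture_frame()
            if cnn_classify(frame) == "package":
                print("Package left at the door")
        # Otherwise the camera and CNN stay powered down.

doorbell_loop([[0.1, 0.2], [0.5, 0.9], [0.0, 0.1]])
```

The power saving comes from the asymmetry: the cheap SNN stage runs continuously, while the expensive CNN stage runs only on the rare windows the SNN flags.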
Q. How does the chip decide which workload runs on CNN or SNN?
A. The chip itself does not make that decision. The developer explicitly defines where each model runs. Spiking neural network models are mapped to the spiking accelerator, convolutional neural network models are mapped to the CNN accelerator, and control logic is handled by the RISC-V CPU. This explicit mapping allows developers to maintain full control over performance, accuracy, and power tradeoffs, rather than relying on automated scheduling decisions.
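In practice, this explicit mapping could be captured in something as simple as a deployment manifest. The format below is purely illustrative; what the interview describes is the idea that the developer, not the chip, assigns each model to a fabric.

```python
# Hypothetical deployment mapping; field names and file names are
# illustrative, not Innatera's actual tooling format.
deployment = {
    "radar_presence": {"model": "presence_snn.net", "fabric": "SNN accelerator"},
    "package_detect": {"model": "package_cnn.net", "fabric": "CNN accelerator"},
    "control_logic":  {"fabric": "RISC-V CPU"},   # plain C or Python code
}
```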
Q. So it can be optional to use one or both?
A. Absolutely. Developers can choose to use only the spiking neural network, only the convolutional neural network, or a combination of both. The choice depends entirely on the application requirements and how the sensing and processing pipeline is designed.
Q. What hardware architecture is used to combine CNN and SNN?
A. The spiking neural network accelerator uses a near-memory architecture, where memory is embedded directly within the compute fabric. This approach reduces data movement and improves efficiency. The CNN accelerator has its own dedicated memory space, optimised for its workload type. All of these blocks are interconnected on the chip, enabling data exchange among the accelerators, the CPU, and the other processing blocks.
Q. Is data flow between CNN and SNN handled by software?
A. The chip integrates on-chip direct memory access (DMA) engines and shared memory spaces that are addressable by both the RISC-V CPU and the on-chip accelerators. Data flow between these components is configured in software and carried out efficiently in hardware.
Q. What challenges did you face with timing synchronisation or clock domains?
A. Given how the chip was architected and developed, we did not encounter fundamental challenges related to timing synchronisation or clock domains. The chip ingests data from sensors at line rate and is not an extremely high-speed design, which naturally reduced system design complexity and simplified synchronisation.
Q. Does the chip automatically select CNN or SNN for smart device scenarios?
A. No. All decisions regarding whether to use CNNs or SNNs are made by the developer. While CNNs are commonly used for image-based processing and SNNs for temporal or event-driven data, accuracy ultimately depends on model design, training, and data quality rather than on the choice of network type alone.
Q. What makes the Pulsar chip so energy efficient?
A. Pulsar was designed as a complete system, with energy efficiency considered across the entire architecture rather than in isolated blocks. Data movement, computation, and memory access were all optimised together. While the chip performs inference very efficiently, the optimised system-level design keeps overall system power consumption low even when inference is performed at high accuracy. In addition, the chip supports multiple power states, including active mode, clock-gated operation, light sleep, and deep sleep. This allows the system to adapt dynamically to different operating conditions, reducing power consumption when full performance is not required.
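A simple policy sketch shows how such power states might be exercised. The four states are the ones named in the interview; the thresholds and the selection logic are illustrative assumptions, not Pulsar firmware.

```python
from enum import Enum

class PowerState(Enum):        # the four modes named in the interview
    ACTIVE = "active"
    CLOCK_GATED = "clock-gated"
    LIGHT_SLEEP = "light sleep"
    DEEP_SLEEP = "deep sleep"

def select_power_state(inference_pending, event_rate_hz):
    """Hypothetical policy: drop into deeper sleep as activity falls.
    Thresholds and logic are illustrative, not Pulsar firmware."""
    if inference_pending:
        return PowerState.ACTIVE
    if event_rate_hz > 100:
        return PowerState.CLOCK_GATED   # brief idle gaps between bursts
    if event_rate_hz > 1:
        return PowerState.LIGHT_SLEEP   # occasional events still expected
    return PowerState.DEEP_SLEEP        # quiet environment, wake on event

print(select_power_state(False, 0.2))   # PowerState.DEEP_SLEEP
```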
Q. Can you explain power efficiency modes and comparisons?
A. Pulsar supports several operating modes, including active mode, clock-gated operation, light sleep, and deep sleep. At the system level, comparisons show that Pulsar’s power consumption is significantly lower than that of conventional microcontrollers and AI-enabled microcontroller units.
Q. How did you test and verify that both networks were working together on one chip?
A. Our chips are already in production and are being used by customers in real-world applications. Each compute fabric has been validated independently, and the combined operation of CNN and SNN has been validated through deployed systems rather than only through internal testing.
Q. Did you encounter any latency bottlenecks in hybrid computation?
A. No. In fact, Pulsar offers low-latency processing at extremely low power, without trading one for the other. Low latency is therefore an advantage of our solution.
Q. Are there limitations in the current design that will be improved later?
A. Neuromorphic computing has much to offer to the world of sensors, and with Pulsar, we have opened the door to innovative processing techniques. As we go down this road, we see opportunities to add more powerful features to our chips, inspired, of course, by the biological brain. With Pulsar being integrated into real products, we are continuously working with customers to identify and solve even more challenging problems. In bringing Pulsar to market, we have learned a great deal about how customers use the chip’s processing capabilities and what they value most. Future iterations will focus on evolution rather than redesign.
Q. How does Pulsar handle noise and variability in spiking networks?
A. Noise and variability are handled through careful system design, as well as by factoring them into training. Neural networks are designed and trained to be invariant to expected sources of noise and runtime irregularities. As a result, by the time a network is deployed, it is already robust to these variations, and noise or other operational fluctuations have little practical impact.
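One common way to build in this invariance is to inject the expected noise during training, so the network never learns features the noise can destroy. The sketch below shows the general technique; it is not necessarily Innatera’s exact method.

```python
import torch

def augment_with_noise(batch, noise_std=0.05, jitter_steps=2):
    """Train-time augmentation: additive sensor noise plus small
    temporal jitter, so the trained network is invariant to both.
    (A standard technique; not necessarily Innatera's exact recipe.)
    batch shape: (time, batch, features)."""
    noisy = batch + noise_std * torch.randn_like(batch)   # sensor noise
    shift = int(torch.randint(-jitter_steps, jitter_steps + 1, (1,)))
    return torch.roll(noisy, shifts=shift, dims=0)        # timing jitter

x = torch.rand(50, 8, 16)
x_aug = augment_with_noise(x)  # substitute for x in the training loop
```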
Q. From a developer perspective, how is Pulsar programmed?
A. Developers use Python/PyTorch for developing AI models and standard C for programming the RISC-V CPU. Programs can also be written entirely in Python. There is no requirement to use proprietary languages or specialised development tools.
Q. What are the main applications where combining CNN and SNN is required?
A. We see strong demand across consumer electronics, Internet of Things (IoT), smart home devices, and wearable products. Typical applications include audio classification, keyword spotting, radar-based human presence detection, image-based recognition, and biomedical signal analysis such as electrocardiogram (ECG) monitoring. In many of these cases, the spiking neural network handles the always-on, low-power front-end processing and classification. In some cases, the SNN is used in conjunction with a CNN.
Q. But these applications are possible without combining CNN and SNN. Why combine them?
A. SNNs and CNNs are strong in their own right, and it is possible to build applications entirely with one style of neural network. However, each is a tool with different strengths. In the simplest case, we have seen CNNs used for spatial recognition tasks, with the output fed into an SNN for temporal recognition. This is a strong example, but there are other scenarios where one model is used to reduce the amount of data entering the other, thereby reducing power dissipation. The exact advantage depends on the specific application use case, but hybrid approaches enable more powerful processing capabilities than a single network type can achieve.
Q. Which industries are likely to adopt this technology first?
A. I think the earliest adopters are in the consumer electronics, IoT, and smart home spaces. We have fantastic customers deploying Pulsar in forward-looking use cases, all of which show that smart devices with localised intelligence at the edge are here. We have also seen consistent interest from industrial and automotive vendors, but the earliest and fastest movers are in consumer electronics, IoT, and smart home.
Q. Is Innatera planning to work on new technologies, and what does success look like?
A. As I said before, Pulsar is the first neuromorphic microcontroller, yet it only scratches the surface of what biological brains are capable of doing. There are exciting possibilities ahead – our future products will focus on enabling higher levels of autonomy, adaptability, and efficiency at the edge. For us, success means enabling intelligent systems that operate reliably on smaller batteries, at lower cost, and with greater functional integration.