What if AI could deliver the accuracy of digital computing while using only a fraction of the power? How close are we to a world where intelligence is seamlessly embedded in everything from wearables to industrial systems without compromise? Ambient Scientific’s
GP Singh reveals the story behind breakthrough mixed-signal AI processors and the bold vision to redefine the future of computing in conversation with Nidhi and Vidushi from EFY.

Q. What does your company do?
A. We are a fabless semiconductor company; production is handled by contract manufacturers. Semiconductors range from simple components such as resistors to complex microprocessors that run hundreds of applications, including Intel x86 processors, the ARM chips in smartphones, and GPUs. Our focus is on advanced AI processors at the complex end of that spectrum.
The next era of computing is AI and deep learning using neural networks. Few companies design microprocessor-level AI chips, and we are one of them. Our core technology, the DigAn or MX8 engines, can be scaled in the same way as GPUs. Our first product, GPX 10, has 10 AI cores, and GPX 10 Pro also has 10 cores but with more memory and broader capabilities. These chips make it possible to run AI on battery-powered devices, which was previously impractical due to high power requirements.
Q. What was the core idea behind creating an AI-native computer architecture?
A. Over the past four to five decades, each new type of computing has led to new processors. Mainframes relied on IBM’s System/390, desktops brought Intel and the x86 architecture, mobile devices adopted ARM, and graphics-intensive workloads required GPUs. Today, AI and deep learning are driving the next shift, requiring processors designed specifically for these workloads.
We recognised this early and began designing AI microprocessors from the ground up. Power consumption was a major challenge, so we developed a low-power AI compute core. This approach improves AI efficiency while directly addressing industry-wide power constraints.
Q. How innovative is your product, if you had to summarise it in a few words?
A. Analogue AI is a natural fit for AI workloads because approximately ninety-five per cent of AI computing consists of matrix operations built from multiply-and-accumulate (MAC) steps. Analogue architectures handle these operations exceptionally well, while digital computing manages the remaining tasks. Historically, errors and reliability issues have limited the adoption of analogue AI, but our innovation overcomes these challenges, positioning us to lead in commercial AI computing for the next two decades.
By combining digital and analogue architectures, we enable developers to use existing software tools without sacrificing accuracy. This seemingly small change creates a paradigm shift and defines a clear roadmap for the next two decades.
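To make the point about MAC dominance concrete, here is a minimal Python sketch (purely illustrative, not Ambient Scientific’s code) showing that a single dense neural-network layer reduces to repeated multiply-and-accumulate steps, the exact operation an analogue MAC array accelerates.

```python
import numpy as np

def dense_layer_mac(inputs, weights, biases):
    """Compute a dense layer explicitly as multiply-and-accumulate (MAC) steps.

    Every output neuron is a running sum of input * weight products --
    exactly the operation an analogue MAC array is built to perform.
    """
    outputs = np.zeros(weights.shape[1])
    for j in range(weights.shape[1]):        # one accumulator per output neuron
        acc = biases[j]
        for i in range(inputs.shape[0]):     # multiply-and-accumulate loop
            acc += inputs[i] * weights[i, j]
        outputs[j] = acc
    return outputs

# Example: a 256-input, 128-output layer performs 256 * 128 = 32,768 MACs.
x = np.random.rand(256)
W = np.random.rand(256, 128)
b = np.zeros(128)
y = dense_layer_mac(x, W, b)
assert np.allclose(y, x @ W + b)   # matches the usual matrix form
```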
Q. Can you explain the difference between the AI core and other cores such as CPUs, GPUs, and traditional microprocessors?
A. Traditional computing with CPUs, GPUs, or microprocessors relies on binary decision trees, where the system makes one decision at a time, with each step dependent on the previous one. In contrast, AI computing is based on pattern matching rather than sequential calculations, using a data-flow architecture instead of a control-flow architecture.
While traditional computers are optimised for a small number of threads, AI cores are designed for hyperscale computing and can handle thousands or even millions of threads simultaneously.
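A toy Python comparison may help illustrate the distinction; the decision function and pattern set below are invented for the example and are not drawn from the GPX chips themselves.

```python
import numpy as np

# Control-flow style: one decision at a time, each step depends on the last.
def classify_control_flow(x):
    if x[0] > 0.5:
        if x[1] > 0.5:
            return "A"
        return "B"
    return "C"

# Data-flow style: the same input is matched against many stored patterns
# at once; every dot product (a batch of MACs) can run in parallel.
def classify_data_flow(x, patterns, labels):
    scores = patterns @ x            # thousands of MACs, no branching
    return labels[int(np.argmax(scores))]

x = np.array([0.7, 0.2, 0.9])
patterns = np.random.rand(1000, 3)   # 1000 candidate patterns scored together
labels = [f"class_{i}" for i in range(1000)]
print(classify_control_flow(x), classify_data_flow(x, patterns, labels))
```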
Q. What are the design challenges in making an AI core processor that can operate on battery power?
A. With traditional architectures, such as GPUs or x86 processors, running thousands of computations in parallel requires thousands of cores or multiple chips. This is highly inefficient because these systems were not designed for large-scale parallelism. The primary challenge is power consumption. Running thousands of computations on traditional digital systems can drain a battery in seconds or minutes. To create a device capable of operating for weeks or months, we required an architecture that could efficiently handle massive parallel computing.
Our solution is analogue AI computing, where core operations such as multiply-and-accumulate are performed using analogue circuits. Analogue AI is highly efficient but introduces challenges, including reliability issues and sensitivity to variations. To address this, we developed a hybrid architecture combining the strengths of digital and analogue computing. We call this the DigAn architecture, derived from digital and analogue. It retains over 80 per cent of the efficiency of analogue computing while eliminating unreliable behaviour.
Q. Are there any trade-offs with accuracy when designing such a low-power AI core?
A. In a purely analogue AI architecture, as adopted by some companies since around 2012 or 2013, there is typically a trade-off between power efficiency and accuracy. Some organisations accept reduced accuracy to achieve power savings. In our case, however, we chose not to compromise on accuracy.
We modified the analogue design and made selective trade-offs in power efficiency to maintain the same level of accuracy as a digital computer. As a result, our DigAn architecture computes accurately without the errors or uncertainty inherent in pure analogue AI systems.
Q. What are the advantages of integrating analogue compute blocks for AI workloads?
A. The primary advantage is efficiency. A digital AI implementation typically requires 300 to 500 transistors for a given operation, whereas an analogue approach can perform the same operation using only a few transistors. In our design, we slightly increased the transistor count compared to a pure analogue implementation, resulting in a 20-30 per cent increase in area and power.
Despite this increase, our analogue computation remains one to two orders of magnitude more power-efficient than a fully digital design.
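A rough back-of-the-envelope calculation, using assumed per-MAC energy figures rather than measured data, shows how a 20-30 per cent overhead still leaves a large net advantage.

```python
# Illustrative comparison with assumed figures, not measured data.
digital_energy_per_mac  = 1.0e-12   # assume ~1 pJ per digital MAC
analogue_energy_per_mac = 1.0e-14   # assume ~2 orders of magnitude lower
digan_overhead          = 1.3       # ~30% area/power added to restore accuracy

digan_energy_per_mac = analogue_energy_per_mac * digan_overhead
savings = digital_energy_per_mac / digan_energy_per_mac
print(f"energy advantage over digital: ~{savings:.0f}x")   # still ~77x here
```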
Q. You mentioned a 20 to 30 per cent increase in area. Is this compared to classical analogue designs?
A. Classical analogue designs are at least two orders of magnitude more efficient than digital designs in terms of area and power because they trade accuracy for efficiency, accepting some level of error. In our case, we prioritised accuracy and chose not to compromise, which makes our solution approximately 20-30 per cent less efficient than a classical analogue design.
Q. What challenges arise when mapping digital neural network models onto analogue compute elements?
A. If computational errors were present, this would create significant challenges. However, we have developed a block that accepts digital input, performs analogue computation, and produces digital output. The conversion from digital to analogue and back is fully accurate, so from an application developer’s perspective, there is no difference between traditional digital computing and our DigAn architecture.
This is the core of our innovation. We provide the same flexibility, accuracy, and software paradigm and toolchain that developers already use, while delivering extremely low power consumption and high performance.
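Conceptually, such a block behaves like the toy model below, in which the DAC/ADC scaling is chosen so the application only ever sees exact digital values; the 8-bit step size and vector length are illustrative assumptions, not the real hardware parameters.

```python
import numpy as np

LSB = 1.0 / 128.0   # assumed 8-bit step size for the toy DAC/ADC

def digan_block(x_int, w_int):
    """Toy model of a digital-in / analogue-compute / digital-out block.

    The caller only ever sees integers; the analogue dot product in the
    middle is invisible to the application developer.
    """
    x_analogue = x_int * LSB                        # "DAC": codes -> analogue levels
    w_analogue = w_int * LSB
    acc_analogue = np.dot(x_analogue, w_analogue)   # analogue MAC array
    return int(round(acc_analogue / (LSB * LSB)))   # "ADC": back to a digital code

rng = np.random.default_rng(1)
x = rng.integers(-128, 128, size=32)
w = rng.integers(-128, 128, size=32)
assert digan_block(x, w) == int(np.dot(x, w))   # identical to a digital result
```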
Q. How does ADC design impact system performance in a mixed-signal AI processor?
A. ADC design is critical because it directly affects the accuracy of signal conversion between analogue and digital domains. Our team has introduced significant innovations by converting certain portions of the ADC into partially digital elements, which helps eliminate errors that can arise during analogue computation. This forms part of the deep technology we have developed and do not disclose publicly.
Components such as ADCs and DACs require careful design to ensure accurate conversion between classical analogue signals and our DigAn architecture. Proper design of these components is essential to maintain the performance and accuracy of our AI processors.
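As a generic illustration of why converter resolution matters (not a description of the undisclosed ADC design), the sketch below shows how quantisation error falls as ADC bit depth increases.

```python
import numpy as np

def adc_quantisation_error(signal, bits, full_scale=1.0):
    """Quantise a signal at a given ADC resolution and report the RMS
    error introduced by the conversion."""
    step = 2 * full_scale / 2**bits
    quantised = np.round(signal / step) * step
    return float(np.sqrt(np.mean((signal - quantised) ** 2)))

signal = np.linspace(-1.0, 1.0, 10_000)
for bits in (6, 8, 10, 12):
    print(f"{bits:>2}-bit ADC -> RMS error {adc_quantisation_error(signal, bits):.5f}")
```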
Q. Can you briefly explain the role of memory in your AI processors?
A. In the embedded world, small devices function as complete computers, storing all data, programs, and AI models on-device rather than relying on external memory or storage. When an application requires a larger AI model, it must fit entirely into on-chip memory.
Our first version offered 320 kilobytes of on-chip memory, which developers quickly found limiting. The new version expands this to 2 megabytes, enabling larger applications, more complex operating systems, additional peripherals and sensors, and more sophisticated software stacks.
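A simple sizing check like the one below, using a hypothetical 300,000-parameter model and assumed overheads, shows why the jump from 320 kilobytes to 2 megabytes matters in practice.

```python
def model_fits(num_params, bytes_per_param, sram_bytes, overhead_bytes=0):
    """Rough check: does a model (plus code and buffers) fit in on-chip SRAM?"""
    return num_params * bytes_per_param + overhead_bytes <= sram_bytes

KB, MB = 1024, 1024 * 1024

# A hypothetical 300k-parameter keyword-spotting model stored as 8-bit weights,
# with ~64 KB reserved for code, activations, and sensor buffers.
params, overhead = 300_000, 64 * KB

print("fits in 320 KB:", model_fits(params, 1, 320 * KB, overhead))  # False
print("fits in 2 MB  :", model_fits(params, 1, 2 * MB, overhead))    # True
```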
Q. What are the benefits of hardware sensor fusion compared to software-based sensor fusion on a CPU?
A. We designed this chip for devices such as safety pendants that connect multiple sensors, including sound, motion, pressure, temperature, and vibration, allowing AI to process complex tasks efficiently. Running this workload on a CPU would rapidly drain the battery, so we developed a hardware-based Sense Mesh that fuses sensor data with minimal power consumption.
The chip can operate on a small battery for up to six months at 100 microwatts, whereas traditional software-based approaches can consume 10x more power. The Sense Mesh feeds data directly to always-active AI cores that continue processing even in subconscious mode, supporting up to 20 digital and eight analogue sensors. This architecture enables practical, always-on AI in battery-powered devices.
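A quick energy-budget estimate illustrates the claim; the coin-cell capacity below is an assumption, and the ideal-case result leaves headroom for regulator losses and peak loads, consistent with the quoted six-month figure.

```python
# Rough battery-life estimate at the quoted 100 microwatt average draw.
# Battery figures are assumptions for illustration (a CR2032-class coin cell).
battery_capacity_mah = 220
battery_voltage_v = 3.0
average_power_w = 100e-6            # 100 microwatts, as quoted

energy_wh = battery_capacity_mah / 1000 * battery_voltage_v   # 0.66 Wh
hours = energy_wh / average_power_w
print(f"runtime: {hours:.0f} h (~{hours / 24 / 30:.1f} months)")  # ideal case
```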
Q. How do you design an efficient data pipeline so that multiple sensor inputs do not become a bottleneck?
A. The answer is sustained engineering effort and creative problem-solving. Our AI architecture operates at just 100 microwatts, but existing digital microphones and ADCs were too power-hungry.
We therefore designed our own 5-microwatt ADC, once considered impractical, which now allows the Sense Mesh to process multiple sensor inputs efficiently without bottlenecks.
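For illustration only, a system-level power budget for such a pipeline might be sketched as below; every figure except the 5-microwatt ADC is an assumed placeholder, not a published specification.

```python
# Toy sensor-pipeline power budget (assumed component figures, for illustration).
budget_uw = {
    "analogue mic + 5 uW ADC": 5,      # the custom low-power ADC mentioned above
    "motion sensor (duty-cycled)": 10,
    "sense mesh + routing": 15,
    "always-on AI cores": 60,
}
total = sum(budget_uw.values())
print(f"total: {total} uW")   # stays near the ~100 uW system target
```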
Q. Can you discuss the new innovations you are currently working on?
A. Our scalable technology supports devices ranging from 10-core ultra-low-power implementations to a 64-core launch planned for 2026-27, delivering high compute performance with minimal energy consumption. We are also developing devices with thousands of cores, far exceeding current efficiency benchmarks.
The primary challenge remains the long development cycle before revenue generation, which many investors find difficult to support.
Q. Do you think transistor scaling limitations will impact the future of AI-native processors?
A. Over the past 20 years, innovation in materials, transistors, and EDA tools has slowed. While smaller process nodes have improved density, fundamental progress has stalled, creating bottlenecks in multi-layer AI, memory, and power delivery.
Emerging technologies such as graphene and quantum computing remain distant, making solutions like ours essential for the next 15-20 years.