Built around a custom AI-first architecture, the processor promises 100x gains in power efficiency and performance over standard MCUs, enabling always-on voice, vision, and sensing tasks on tiny battery-powered devices.

A new AI-native processor is set to reshape edge computing by delivering up to 100x improvements in power efficiency and performance compared to traditional 32-bit microcontrollers (MCUs). Designed specifically for battery-powered devices, the chip enables always-on AI inference for applications such as voice recognition, face authentication, intelligent sensing, and low-power vision systems.
At the heart of the Ambient Scientific processor is an architecture purpose-built for AI workloads. Unlike conventional MCUs and NPUs that carry the overhead of general-purpose instruction sets, this design maps neural-network matrix operations directly onto in-memory analog compute blocks, eliminating wasted cycles. The result is high-speed, low-energy execution of convolutional (CNN), recurrent (RNN), and other neural models at the edge.
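To make that concrete, the sketch below shows in plain NumPy, purely for illustration, why neural-network layers are dominated by multiply-accumulate (MAC) work. These MACs are the operations the in-memory analog blocks execute in parallel rather than stepping through on a general-purpose core; the layer sizes are arbitrary and nothing here reflects the chip's actual programming model.

```python
import numpy as np

# A dense (fully connected) layer is just multiply-accumulate (MAC) work:
# every output neuron accumulates weight * input products. A conventional MCU
# pays instruction fetch/decode overhead on each of these MACs, whereas an
# in-memory analog compute block performs many of them in parallel.

def dense_layer_macs(weights: np.ndarray, inputs: np.ndarray) -> np.ndarray:
    outputs = np.zeros(weights.shape[0])
    for i in range(weights.shape[0]):        # one output neuron at a time
        for j in range(weights.shape[1]):    # one MAC per weight
            outputs[i] += weights[i, j] * inputs[j]
    return outputs

# Hypothetical 40-input, 64-output layer: 2,560 weights -> 2,560 MACs,
# which happens to match the chip's stated per-cycle MAC capacity.
w = np.random.randn(64, 40)
x = np.random.randn(40)
assert np.allclose(dense_layer_macs(w, x), w @ x)
```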
The key features are:
- Integrates 10 programmable AI cores across two power domains
- Always-on block consumes <100 µW for tasks like keyword spotting
- Optimized for ultra-low-power sensor fusion
- Total peak AI throughput: 512 GOPS
- Supports up to 2,560 MAC operations per cycle (a quick consistency check of these figures follows the list)
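The two headline figures are mutually consistent under the common convention that one MAC counts as two operations, a multiply and an accumulate. The roughly 100 MHz compute clock this implies is an inference from the stated numbers, not a published specification.

```python
# Quick consistency check on the headline numbers (an inference, not a spec).
peak_ops_per_s = 512e9    # 512 GOPS, as stated
macs_per_cycle = 2560     # as stated
ops_per_mac = 2           # multiply + accumulate convention (assumption)

implied_clock_hz = peak_ops_per_s / (macs_per_cycle * ops_per_mac)
print(f"Implied compute clock: {implied_clock_hz / 1e6:.0f} MHz")  # -> 100 MHz
```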
Supporting this compute engine is 2MB of on-chip SRAM, ten times more than its predecessor, enabling more complex models to run locally. For traditional control tasks, an Arm Cortex-M4F CPU is included. The chip also packs in an ultra-low-power ADC, enhanced I²S logic, and interfaces for up to 28 sensors (eight analog and 20 digital), making it a full-fledged system-on-chip (SoC).
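As a rough sense of scale (an illustration, not a vendor figure), 2MB of SRAM leaves room for a model with on the order of a million int8-quantized parameters once some space is reserved for activations and buffers, which is comfortably more than typical keyword-spotting or small vision networks require:

```python
# Rough sizing of what 2 MB of on-chip SRAM can hold. Illustrative only;
# real budgets must also cover activations, I/O buffers, and runtime overhead.
sram_bytes = 2 * 1024 * 1024
weight_bytes = 1                  # int8-quantized weights (assumption)
activation_budget = 256 * 1024    # hypothetical reservation for activations

max_params = (sram_bytes - activation_budget) // weight_bytes
print(f"Room for roughly {max_params / 1e6:.1f} M int8 parameters")  # ~1.8 M
```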
Developers gain flexibility through the Nebula AI toolchain, which supports TensorFlow, Keras, and ONNX for training and deployment. The cores are fully programmable, ensuring adaptability to evolving AI model types. A hardware layer called SenseMesh further enhances performance by enabling low-latency sensor fusion and reducing CPU polling overhead.
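As a hedged sketch of what that developer flow could look like, the snippet below builds a toy keyword-spotting CNN in Keras and exports it to ONNX with the open-source tf2onnx converter. The model shape, class count, and filenames are illustrative assumptions, and the final import into the Nebula toolchain is not shown because that step is not documented here.

```python
import tensorflow as tf
import tf2onnx

# Toy keyword-spotting style CNN in Keras. The Nebula toolchain is said to
# accept TensorFlow/Keras and ONNX models, so a standard export flow like this
# is the kind of starting point a developer would use.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40, 1)),         # e.g. an MFCC spectrogram
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation="softmax"),   # 12 keyword classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Export to ONNX for toolchains that prefer that format.
spec = (tf.TensorSpec((None, 49, 40, 1), tf.float32, name="input"),)
tf2onnx.convert.from_keras(model, input_signature=spec,
                           opset=13, output_path="kws.onnx")
```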
Demonstrations have shown the chip powering fall detection, face ID, and voice recognition on devices running only on coin cell batteries—an achievement that highlights its efficiency. Sampling has begun, with volume production expected in early 2026. If successful, this processor could accelerate the shift toward smarter, self-sufficient edge devices, reducing dependency on cloud inference while extending battery life across wearables, IoT, and industrial sensors.
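A back-of-the-envelope check, assuming a standard CR2032 cell and ignoring the rest of the system, sensor wake-ups, and self-discharge, suggests why the coin-cell demonstrations are plausible for the always-on domain alone:

```python
# Upper-bound estimate for the always-on domain running from a coin cell.
# Assumptions, not specs: a CR2032 holds roughly 225 mAh at ~3 V nominal.
cell_wh = 0.225 * 3.0      # ~0.68 Wh of usable energy (optimistic)
always_on_w = 100e-6       # <100 uW always-on domain (stated ceiling)

hours = cell_wh / always_on_w
print(f"~{hours / 24:.0f} days of continuous keyword spotting")  # roughly nine months
```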