Two microcontroller families can run AI directly on the device, reducing latency, lowering power consumption and supporting motor control and other embedded system functions.

Texas Instruments has introduced two microcontroller families designed to run artificial intelligence on embedded devices. The two offerings, the MSPM0G5187 microcontroller and the AM13Ex microcontroller family, bring AI capabilities to processor-based electronic systems.
Both include the company’s TinyEngine neural processing unit (NPU), a hardware accelerator built for microcontrollers. The NPU runs deep learning inference on the device itself, reducing latency and energy use for edge processing.
The MSPM0G5187 is built on an Arm Cortex-M0+ core and integrates the TinyEngine NPU. The accelerator runs neural network calculations locally while the CPU handles application code. Compared with microcontrollers that lack a hardware accelerator, the NPU reduces the flash memory needed for AI workloads, cuts inference latency by up to 90x and lowers energy use by more than 120x for each AI inference.
The second release, the AM13Ex microcontroller family, targets motor control systems in appliances, robotics and industrial equipment, where functions such as adaptive control and predictive maintenance rely on AI and have traditionally required multiple chips.
The AM13Ex devices combine an Arm Cortex-M33 core, the TinyEngine NPU and motor control functions on a single chip. This integration can reduce bill-of-materials costs by up to 30% while letting designers run motor control and AI algorithms on the same device.
The microcontrollers can run control loops for up to four motors while the NPU executes algorithms for load sensing and energy optimisation. An integrated trigonometric maths accelerator also performs calculations up to ten times faster than coordinate rotation digital computer (CORDIC) implementations, improving motor control response and precision.
“TI invented the digital signal processor almost 50 years ago, laying the groundwork for today’s edge AI processing,” said Amichai Ron, senior vice president, Embedded Processing and DLP Products at TI. “Now TI is leading the next phase of innovation by integrating the TinyEngine NPU across our entire microcontroller portfolio, including general-purpose and high-performance, real-time MCUs. By enabling AI across our software, tools, devices and ecosystem, we are making edge AI accessible and easy to use for every customer and every application.”
