A small module now brings big AI power to edge devices, running LLMs and vision tasks with low energy use and added security. Want to know more?

Running demanding AI models at the edge often requires large, power-hungry PCIe cards that are ill-suited to compact or energy-limited deployments. To address this, Axelera AI has introduced the Metis M.2 Max, an AI accelerator that delivers PCIe-class performance in the compact M.2 form factor. It is designed for compute-intensive inference tasks such as large language models, vision transformers, and generative AI while keeping average power draw at 6.5 W.
The processor delivers a 33% performance boost for convolutional neural networks and doubles token-per-second throughput for LLMs and vision-language models compared with the earlier Metis M.2. This makes it well suited to industries where edge AI must handle real-time, high-volume workloads, such as industrial automation, retail, healthcare, public safety, and security systems.
To fit these varied environments, the Metis M.2 Max supports up to 16 GB of memory, doubles memory bandwidth, and comes in both standard and extended temperature versions. A slimmer design, with 27% lower card height, improves system integration options, while a low-profile heatsink and an onboard power probe allow tuning for space- or thermally constrained deployments.
Security is built in through firmware integrity checks, a Root of Trust, and secure boot and upgrade processes, ensuring that only authenticated Axelera AI firmware can run on deployed systems. This addresses the distinct security risks faced by edge devices.
The Metis M.2 Max is available as a standalone 2280 M.2 M-key module or with an optional heatsink. Developers can access its full capabilities through Axelera AI's Voyager SDK, which supports both industry-standard and proprietary models for easier integration into AI projects.
Together with the original Metis M.2, the new version gives customers more flexibility to scale edge AI workloads across applications ranging from computer vision to LLMs.
