The modules incorporate NVIDIA's embedded GPUs to help accelerate compute-intensive, SWaP-constrained applications
GPUs are increasingly used for AI inferencing at the edge, where size, weight and power (SWaP) are at a premium. As edge AI becomes more prevalent, the need for high-performance, low-power GPU modules grows ever more critical.
ADLINK Technology's new embedded MXM-based graphics modules, built on NVIDIA's Turing architecture, accelerate edge AI inference in SWaP-constrained applications. They deliver the high compute power needed to transform data at the edge into actionable intelligence, and they come in a standard format that gives systems integrators, ISVs and OEMs greater choice in both power and performance.
The embedded MXM graphics modules accelerate edge computing and edge AI across a range of compute-intensive applications, particularly in harsh or environmentally challenging settings such as those with limited or no ventilation, or with corrosive atmospheres. Examples include medical imaging, industrial automation, biometric access control, autonomous mobile robots, transportation, and aerospace and defence.
The ADLINK embedded MXM graphics modules:
- Provide acceleration with NVIDIA CUDA, Tensor and RT Cores
- Are one-fifth the size of full-height, full-length PCI Express graphics cards
- Offer more than three times the lifecycle of non-embedded graphics
- Consume as little as 50 watts of power
“The new embedded MXM graphics modules provide the perfect balance between size, weight and power for edge applications, where the demand for more processing power continues to increase,” said Zane Tsai, director of platform product centre, ADLINK. “Leveraging NVIDIA’s GPUs based on the Turing architecture, our customers can now increase their edge processing performance with ruggedised modules that are fit for any environment while remaining inside their SWaP envelope.”