
Deep Learning Platform For Smarter AI Inferencing At The Edge

A compact, high-performance GPU-enabled deep learning acceleration platform for deploying AI at the edge across industrial applications

ADLINK Technology has launched one of the most compact GPU-enabled deep learning acceleration platforms, the DLAP x86 series, which targets the deployment of deep learning in volume, at the edge where data is generated and actions are taken. It is optimised to deliver AI performance in various industrial applications by accelerating compute-intensive, memory-hungry AI inferencing and learning tasks.


The DLAP x86 series features:

  • Heterogeneous architecture for high performance – Intel processors paired with NVIDIA Turing-architecture GPUs deliver GPU-accelerated computation with optimised performance per watt and per dollar.
  • Compact size – starting at 3.2 litres, the DLAP x86 series suits mobile devices and instruments where physical space is limited, such as mobile medical imaging equipment.
  • Rugged design for reliability – withstands operating temperatures up to 50 degrees Celsius with 240 watts of heat dissipation, vibration up to 2 Grms and shock up to 30 Grms, for dependable operation in industrial, manufacturing and healthcare environments.

Delivering an optimal mix of size, weight and power (SWaP) and AI performance in edge AI applications, the DLAP x86 helps transform operations in healthcare, manufacturing, transportation and other sectors. Examples of use include:

  • Mobile medical imaging equipment: C-arm, endoscopy systems, surgical navigation systems.
  • Manufacturing operations: object recognition, robotic pick and place, quality inspection.
  • Edge AI servers for knowledge transfer: combining pre-trained AI models with local data sets.

“The DLAP x86 series offers the flexibility to handle large multilayered networks and complex datasets. Architects can choose the optimal combination of CPU and GPU processors based on the demands of an application’s neural networks and required AI inferencing speed, yielding high performance per dollar,” said Zane Tsai, Director of ADLINK’s Embedded Platforms & Modules Product Center.


Vinay Prabhakar Minj
Vinay Prabhakar Minj is a technology writer and science communication specialist with a Master’s degree in Communication of Science and Innovation (Science Communication). He is a prolific contributor to Electronics For You, where he has authored over 1,000 articles covering electronics, semiconductors, embedded systems, IoT, and emerging technologies. With a strong foundation in science communication, Vinay focuses on translating complex engineering concepts into clear, accessible, and application-oriented content. His work spans topics such as sensor technologies, chip design, wireless systems, and next-generation electronics, making advanced innovations easier to understand for engineers, students, and industry professionals. Through his extensive contributions, he has built a reputation for delivering reliable, well-researched, and practical insights that help readers stay updated with the rapidly evolving electronics ecosystem. His writing bridges the gap between technical depth and real-world usability, supporting both learning and decision-making in the field.
