Friday, January 9, 2026

Next-Gen Storage Stack Supercharges AI

A new enterprise storage lineup and client-side GPU-assisted acceleration aim to push AI workloads faster, from dense cloud clusters to everyday laptops, reshaping how organizations deploy and scale AI.


Phison’s latest storage and acceleration technologies arrive at a moment when enterprises are scrambling to match infrastructure with soaring AI compute demand. The company’s expanded portfolio, anchored by two PCIe Gen5 enterprise SSDs and an iGPU-based AI acceleration layer, targets AI training, inference, analytics and cloud-scale data operations with a focus on predictable performance and low latency.


The update centers on two SSD platforms built to handle high-volume, latency-sensitive AI pipelines. The higher-end model is engineered for throughput-heavy environments such as AI training clusters, HPC systems and large analytics engines. It delivers up to 14.5 GB/s reads, 12 GB/s writes and up to 3300K/1050K random IOPS, addressing the bottlenecks common in multi-node training and real-time analytics. Capacities reach 30.72 TB with endurance options up to 3 DWPD.
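The headline specifications can be put in practical terms with some back-of-envelope arithmetic (illustrative only, not a vendor benchmark): reading the full 30.72 TB drive sequentially at 14.5 GB/s takes roughly 35 minutes, and a 3 DWPD rating corresponds to a sustained write budget of about 92 TB per day.

```python
# Illustrative arithmetic from the quoted specs (not a benchmark result).
capacity_tb = 30.72    # drive capacity in TB (decimal: 1 TB = 1e12 bytes)
read_gbps = 14.5       # sequential read throughput in GB/s (1 GB = 1e9 bytes)
dwpd = 3               # drive writes per day (endurance rating)

# Time to read the entire drive once at full sequential throughput.
full_read_seconds = capacity_tb * 1e12 / (read_gbps * 1e9)

# Daily write volume implied by the DWPD endurance rating.
daily_write_tb = capacity_tb * dwpd

print(f"Full sequential read: ~{full_read_seconds / 60:.0f} minutes")
print(f"Rated daily write budget: ~{daily_write_tb:.2f} TB/day")
```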

The key features are:

  • Gen5 SSD throughput up to 14.5 GB/s read and 12 GB/s write
  • Up to 3300K / 1050K random IOPS for AI-intensive workloads
  • Capacities scaling to 30.72 TB (performance) and 15.36 TB (density)
  • aiDAPTIV+ boosts iGPU AI agent performance by up to 25×

A second model targets hyperscale and cloud operators prioritizing density and energy-efficient performance for object storage, content delivery networks and large distributed databases. While using the same Gen5 backbone (14.5 GB/s reads and 12 GB/s writes), the design optimizes for compact E1.S deployments with capacities up to 15.36 TB.


Beyond the data center, the company is pushing AI acceleration deeper into the client stack. Its aiDAPTIV+ technology enables AI agents to run efficiently on integrated GPUs, delivering up to 25× faster performance and slashing inference response times, with one test case dropping from 73 seconds to 4 seconds. The aim is to turn mainstream laptops into viable AI productivity machines for IT teams, developers and students without the cost of dedicated GPUs.
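For context, the single quoted test case works out to roughly an 18× speedup on its own; the 25× figure is the vendor's "up to" upper bound. A quick check of the arithmetic:

```python
# Speedup implied by the quoted test case (73 s down to 4 s).
baseline_s = 73       # inference response time on iGPU without acceleration
accelerated_s = 4     # response time with aiDAPTIV+ in the quoted test
speedup = baseline_s / accelerated_s
print(f"Measured speedup: {speedup:.2f}x")  # ~18x; 25x is the "up to" claim
```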

The rollout builds on earlier high-capacity Gen5 deployments, including a 122.88 TB E3.L SSD now shipping to OEMs, with capacities up to 245 TB slated for the portfolio. The goal: unify controller design, firmware and AI-oriented memory technologies under a single architecture that can scale from early AI experimentation to full-scale enterprise deployment.

Akanksha Gaur
Akanksha Sondhi Gaur is a journalist at EFY. She has a German patent and brings a robust blend of 7 years of industrial & academic prowess to the table. Passionate about electronics, she has penned numerous research papers showcasing her expertise and keen insight.
