The GPU servers deliver major gains in speed and energy efficiency, poised to change how AI, graphics, and data workloads are run in data centers.

NVIDIA is bringing the RTX PRO 6000 Blackwell Server Edition GPU to the world’s most widely deployed enterprise rack-mounted servers, signaling a major move from CPU-focused infrastructure to GPU-accelerated computing. The new 2U mainstream servers will be available from Cisco, Dell Technologies, HPE, Lenovo, and Supermicro, offering enterprises global access to GPU acceleration for a wide range of workloads.
These servers deliver up to 45× higher performance and 18× greater energy efficiency than CPU-only 2U systems, reducing total cost of ownership while boosting capabilities for AI, analytics, graphics, simulations, and industrial applications. They also serve as the backbone of the NVIDIA AI Data Platform, enabling customizable storage solutions for enterprise AI.
The new models join the RTX PRO Server portfolio introduced in May, now covering 2-, 4-, and 8-GPU rack designs for varied performance needs. At SIGGRAPH, Dell announced upgrades to its AI Data Platform and the new PowerEdge R7725 server with two RTX PRO 6000 GPUs, NVIDIA AI Enterprise software, and NVIDIA networking.
Powered by the Blackwell architecture, RTX PRO Servers feature:
- New Tensor Cores and a faster Transformer Engine run AI workloads up to 6× faster than the previous-generation L40S GPUs.
- Upgraded RTX ray-tracing hardware renders photorealistic visuals up to 4× faster.
- Multi-Instance GPU support can partition a single GPU into up to four fully isolated instances, each serving a different user or workload (see the sketch after this list).
- Lower power draw for the same work, improving overall energy efficiency.
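As an illustration of the partitioning feature above, the snippet below is a minimal sketch of how a workload scheduler might check a GPU's MIG state before assigning jobs. It assumes a Linux host with the NVIDIA driver and the nvidia-ml-py (pynvml) package installed; creating the instances themselves is an administrative step (for example via nvidia-smi) and is not shown.

```python
# Minimal sketch: inspect Multi-Instance GPU (MIG) state with pynvml.
# Assumes a Linux host with the NVIDIA driver and the nvidia-ml-py package
# (pip install nvidia-ml-py). Partition creation happens out of band, so
# this script only reads the current state.
import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)            # first GPU in the server
    print("GPU 0:", pynvml.nvmlDeviceGetName(handle))

    current, pending = pynvml.nvmlDeviceGetMigMode(handle)   # MIG enabled/disabled
    if current != pynvml.NVML_DEVICE_MIG_ENABLE:
        print("MIG is not enabled on this GPU")
    else:
        # Walk the possible MIG slots and report memory for each live instance.
        max_slots = pynvml.nvmlDeviceGetMaxMigDeviceCount(handle)
        for i in range(max_slots):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(handle, i)
            except pynvml.NVMLError:
                continue                                      # slot not populated
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"  MIG instance {i}: {mem.total / 2**30:.1f} GiB total memory")
finally:
    pynvml.nvmlShutdown()
```

In practice, a scheduler would use this kind of query to decide whether to hand a job a full GPU or one of the isolated instances.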
For Physical AI and robotics, these servers run NVIDIA Omniverse and Cosmos world foundation models, accelerating factory simulations, robotics training, and synthetic data generation by up to 4× compared with L40S GPUs. They also support NVIDIA Blueprint for video search, vision-language models, and smart space applications.
For AI agents and reasoning workloads, RTX PRO Servers are certified for NVIDIA AI Enterprise. Using the NVFP4 data format on a single RTX PRO 6000 GPU, the new Llama Nemotron Super model achieves up to 3× better price-performance than FP8 on NVIDIA H100 GPUs.
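To make the price-performance comparison concrete, the back-of-the-envelope sketch below shows how such a figure is typically derived: inference throughput divided by the cost of GPU time. All throughput and hourly-cost numbers are hypothetical placeholders chosen only to illustrate the calculation, not published benchmarks.

```python
# Illustrative price-performance arithmetic only; the throughput and hourly
# cost figures below are hypothetical placeholders, not NVIDIA benchmarks.
def price_performance(tokens_per_second: float, dollars_per_gpu_hour: float) -> float:
    """Tokens generated per dollar of GPU time."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / dollars_per_gpu_hour

# Hypothetical inputs for the two setups being compared.
rtx_pro_nvfp4 = price_performance(tokens_per_second=6000, dollars_per_gpu_hour=2.00)
h100_fp8      = price_performance(tokens_per_second=4000, dollars_per_gpu_hour=4.00)

print(f"RTX PRO 6000 (NVFP4): {rtx_pro_nvfp4:,.0f} tokens per dollar")
print(f"H100 (FP8):           {h100_fp8:,.0f} tokens per dollar")
print(f"Price-performance ratio: {rtx_pro_nvfp4 / h100_fp8:.1f}x")
```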
Built on the CUDA-X ecosystem with over 6 million developers and nearly 6,000 supported applications, the Blackwell platform enables enterprises to scale AI and computing workloads across thousands of GPUs, opening new possibilities in data-driven innovation.
