A full AI stack now runs on a domestic system, bringing model, inference engine, and compute together and showing how AI workloads can execute on locally built infrastructure in India.

Turiyam AI, an Indian artificial intelligence compute solutions company, has announced the successful deployment of its inference engine on an indigenous server architecture at the Centre for Development of Advanced Computing (C-DAC), Pune. The milestone marks the execution of an AI software stack developed in India, with both the model and the inference engine integrated within a single domestic compute environment.
As part of the deployment, Turiyam integrated its inference-first compute platform with C-DAC's Rudra 1 and Rudra 2 servers, enabling advanced AI workloads to run on indigenous server systems. During validation, a Hindi large language model covering 37 dialects was successfully run on Turiyam's inference engine within the C-DAC infrastructure environment.
This deployment represents a complete AI execution pipeline built in India. It brings together an Indian-developed large language model, a domestically built inference engine, and an indigenous server architecture, all executed within C-DAC’s high-performance computing environment.
Commenting on this milestone, Shri E Magesh, Director General, C-DAC, said, “C-DAC continues to work closely with industry, academia and research partners to strengthen India’s advanced computing ecosystem. The validation of advanced AI workloads on indigenous computing infrastructure reflects the growing maturity of India’s research and innovation ecosystem. C-DAC is open to enabling platforms that support the development and deployment of next generation technologies.”
Sanchayan Sinha, Co-founder and CEO, Turiyam AI, said, “This milestone proves that India can build and execute across the full AI stack, from model to inference engine and advanced compute platforms. By validating performance within C-DAC’s environment, we are demonstrating that advanced AI workloads can run on domestically engineered systems without compromise.”
