
“Power For AI Is A Trillion-Dollar Opportunity” – Preetam Tadeparthy And Vikram Gakhar, C2i Semiconductors

An Indian startup is rearchitecting power delivery for AI servers, cutting energy losses, enhancing GPU performance, and developing chips poised to shape the next generation of global AI infrastructure. In a conversation with Preetam Tadeparthy and Vikram Gakhar of C2i Semiconductors, Nidhi Agarwal and Saba Aafreen from EFY examine the technology, challenges, and vision driving these innovations.


(L to R) C2i Semiconductors Founders—Vikram Gakhar, Founder & Director, Mixed Signal; Preetam Tadeparthy, Founder, CTO, VP, Engineering; Ram Anant, Founder & CEO; B.S. Dattatreya, Founder & Director, Power

Q. What role does C2i Semiconductors play in powering next-generation AI infrastructure?

A. C2i Semiconductors stands for conversion, control, and intelligence, which define the company’s approach to power electronics. Conversion transforms electrical power between voltage domains, control regulates this process to ensure stability and efficiency, and intelligence enhances power management through embedded capabilities in controllers or AI systems.

Building on this foundation, the company is developing a total power solution for AI infrastructure, particularly hyperscalers and data centres, covering the entire grid-to-core power delivery chain. This includes converting the high-voltage grid input, typically 400 to 800 volts, into tightly regulated power suitable for advanced processors. The company was founded with the ambition of building a product-driven technology firm from India focused on rearchitecting end-to-end power delivery for server platforms while setting new benchmarks in innovation and quality for the global AI computing ecosystem.

Q. What technical challenges has C2i Semiconductors faced in converting high-voltage input to sub-1-volt rails for AI processors?

A. Our first-generation product does not directly convert 800 volts to sub-1-volt rails. The current industry standard, used in about 90 per cent of deployments, is a three-stage conversion from 400 volts to 48 volts, then to 12 or 6 volts, and finally to the GPU core voltage. Our current product, taping out in Q3 2026, focuses on this final stage. For next-generation designs, we are exploring direct 800-volt-to-bus-voltage conversion and evaluating whether a one- or two-stage final conversion is more efficient.
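The trade-off between stage count and efficiency described above can be illustrated with simple arithmetic: cascaded stages multiply their efficiencies, so removing a stage can raise end-to-end efficiency even if the remaining stages are individually harder to build. This is a minimal sketch with placeholder per-stage efficiencies, not C2i's actual figures.

```python
# Illustrative sketch (assumed values, not C2i data): end-to-end
# efficiency of a power-delivery chain is the product of the
# per-stage conversion efficiencies.

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of conversion stages connected in series."""
    eff = 1.0
    for e in stage_efficiencies:
        eff *= e
    return eff

# Hypothetical three-stage chain: 400 V -> 48 V -> 12 V -> core
three_stage = chain_efficiency([0.97, 0.96, 0.92])

# Hypothetical two-stage chain: 400 V -> bus -> core
two_stage = chain_efficiency([0.97, 0.93])

print(f"Three-stage: {three_stage:.1%}")  # product of the three efficiencies
print(f"Two-stage:   {two_stage:.1%}")
```

With these assumed numbers the two-stage chain comes out ahead, which is why the one- versus two-stage question for the final conversion is worth evaluating at all.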

A key challenge is identifying a topology that enables such large conversion ratios (roughly 1000:1) in one step, along with a control approach that balances efficiency with GPU load demand. C2i’s architectural uniqueness lies in its software-defined VR approach.

For low-voltage stages, silicon is sufficient, but for high-voltage conversion, we partner with fabs that provide GaN solutions.

Q. At what level is the 10% efficiency gain measured, and where does it come from?

A. The 10 per cent efficiency gain is measured at the system level and comes from the company’s proprietary IP, with a reduced number of stages and optimisation across both control and power conversion in each stage. The company has filed nine patents, with several more in progress, enabling extremely fast switching and highly optimised power conversion. Efficiency losses in a system typically occur in three areas. First, during voltage conversion from the grid level to the core level, each step incurs losses. Second, intermediate bus routing on the motherboard, carrying hundreds to thousands of amps, introduces resistive losses. Third, GPU operation involves frequent transitions between high-performance and deep-sleep modes, where latency in power management can reduce performance (indirect efficiency).
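The second loss area above, resistive loss on the intermediate bus, follows directly from P = I²R: for a fixed power, halving the current (by raising the bus voltage) cuts the loss by a factor of four. The sketch below uses assumed bus resistance and power values purely to illustrate the scaling, not C2i's measurements.

```python
# Illustrative sketch (assumed values, not C2i data): resistive loss
# on an intermediate power bus, P_loss = I^2 * R, where I = P / V.

def bus_loss_watts(power_w, bus_voltage_v, bus_resistance_ohm):
    """Power dissipated in the bus resistance while delivering power_w."""
    current_a = power_w / bus_voltage_v   # higher bus voltage -> lower current
    return current_a ** 2 * bus_resistance_ohm

# Delivering 1 kW over a hypothetical 0.5 milliohm bus:
loss_12v = bus_loss_watts(1000, 12, 0.0005)
loss_48v = bus_loss_watts(1000, 48, 0.0005)

print(f"Loss at 12 V bus: {loss_12v:.2f} W")
print(f"Loss at 48 V bus: {loss_48v:.2f} W")
```

The 48-volt bus carries a quarter of the current of the 12-volt bus and so dissipates one sixteenth of the power, which is why bus voltage choice and routing optimisation matter at the hundreds-to-thousands-of-amps currents mentioned above.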

The company’s IP optimises across all three levels, resulting in a net 10% gain in queries processed per watt.

Q. How would you sum up the advantages of your power management solutions?

A. Our power management architecture offers several differentiators over incumbent solutions. We are developing a software-defined voltage regulator that is highly scalable, operates from the grid to the core, and is processor-agnostic. It can support systems ranging from 100 watts to several kilowatts without compromising performance. Key technical advantages include superior thermal efficiency, higher overall system efficiency, and compact density, reducing real estate requirements.

Additionally, the company’s proprietary IP extends the lifespan of power solutions, effectively doubling the life of server power systems compared to existing components.

These innovations translate into measurable gains, including up to a 10% improvement in GPU performance (tokens processed per watt) and significantly longer server lifecycles.

Q. How do you approach co-design between power delivery and AI compute loads?

A. Co-design involves two layers. For power delivery, we work closely with processor companies and OEMs because every server solution has unique requirements.

For hardware-software partitioning, our die currently runs only C2i Semi’s own algorithms, both in hardware and software. We collaborate with customers to demonstrate the performance advantages of these algorithms.

If a feature meaningfully improves system performance, we enable it, allowing customers to benefit and potentially market it. Internal partitioning between hardware and software is driven by architectural choices and embedded intelligence, refined through ongoing customer engagement.

Q. How did you validate the reliability of your design before committing to silicon?

A. We validate reliability through close collaboration with fabs such as Tower and GlobalFoundries, reviewing designs and data with their process engineers on a fortnightly basis.

Beyond that, C2i’s experience provides an edge. Our team has worked on similar products long enough to anticipate critical gaps that simulations cannot fully capture.

For areas that cannot be validated pre-silicon, we plan accordingly, ensuring budgets and design margins account for variations between silicon and simulation.

Q. What are the main risks between tape-out and first silicon validation?

A. The biggest risk is schedule, especially since we are on a multi-project wafer. Any delay can push back silicon delivery, so we focus on completing all tasks on schedule while leaving time to verify changes.

Another challenge is the inherent uncertainty between simulation and silicon. Beyond the chip itself, factors such as packaging, package-to-motherboard interactions, and passive components introduce variations that are difficult to model accurately.

No one can guarantee 100 per cent performance at this stage, so we build in design and verification buffers to mitigate these risks and ensure the final product meets expectations.

Q. How does a startup like C2i Semiconductors protect its core IP while collaborating with customers or partners?

A. We protect our core technology primarily through patents. We have already filed nine and have several more in the pipeline. For collaborations—whether with universities or partners—any IP developed under a C2i project belongs to the company, with clear agreements governing the use of third-party IP.

For OEMs and other customers, we treat them as clients, sharing differentiated features and performance as needed, while ensuring our core IP remains fully protected.

Q. Who are your target customers and how do you position in India and globally?

A. The company’s primary customers are global data centre operators, AI players, hyperscalers, network companies, and storage providers. We are actively engaging with them, demonstrating our silicon capabilities and receiving positive feedback on our feature set and performance targets. In India, leading customers are evolving into self-sufficient OEMs by building servers and systems in-house, unlocking new pathways for indigenous product technologies.

India is transitioning from deploying AI infrastructure to manufacturing it domestically. As Indian players begin building population-scale AI solutions, they will adopt more system engineering practices, creating semiconductor opportunities.

By first establishing success in global markets, the company aims to expand into the Indian ecosystem. While the government is developing initiatives under ISM 2.0 and ISM 3.0 to support domestic OEM and ODM capabilities, there are currently no dedicated programmes ensuring semiconductor products developed in India are consumed locally.

Q. How does the company approach partnerships with OEMs, AI companies, or startups to scale the business?

A. Partnerships begin the moment the silicon is available, as performance metrics speak for themselves. We present our first-generation silicon to OEMs, ODMs, processor players, and other technology partners, demonstrating current capabilities alongside a clear roadmap for future generations, while also evidencing readiness for volume production.

Generation 1 establishes thermal, performance, and reliability benchmarks, while Generations 2 and 3 demonstrate how these metrics improve, building confidence in long-term collaboration.

Over time, we work closely with fabs, packaging vendors, and system integrators to define new process nodes, package technologies, and thermal solutions.

This multi-generation roadmap and end-to-end approach—rare even among large incumbents—forms the foundation for strategic partnerships and long-term scaling, driven from India by the startup team.

Q. What challenges have you faced in scaling operations, hiring talent, and balancing R&D with commercialisation?

A. India has strong generic talent, but finding engineers with domain expertise in high-power AI applications and alignment with the company’s mission has been a major challenge. Competing with large corporations for experienced talent requires patience and careful selection.

A strong founding team enabled the development of the next layer of leadership. Motivating people to balance short-term stability with long-term impact is also critical.

Another challenge is keeping pace with a rapidly evolving AI market while building a multi-year technology roadmap from India. This requires a clear view of market direction, supported by informed judgment.

As India’s ecosystem grows towards ‘Make in India’ AI infrastructure, both talent availability and market access should improve.

Q. What is your vision for a ‘Make in India’ AI ecosystem in the future?

A. Our vision is to enable population-scale AI that is affordable and optimised for India’s needs.

Today, AI infrastructure is expensive, but solutions built in India for India can reduce costs, improve accessibility, and ensure long-term efficiency and reliability.

We expect India to adopt a more distributed and edge-centric AI model, supporting applications such as speech, vision, and local-language AI on optimised infrastructure.

As the ecosystem evolves, more system-level design and semiconductor innovation will emerge from India, shifting the focus from ‘assembled in India’ to ‘architected in India.’


Nidhi Agarwal
Nidhi Agarwal is a Senior Technology Journalist at Electronics For You, specialising in embedded systems, development boards, and IoT cloud solutions. With a Master’s degree in Signal Processing, she combines strong technical knowledge with hands-on industry experience to deliver clear, insightful, and application-focused content.

Nidhi began her career in engineering roles, working as a Product Engineer at Makerdemy, where she gained practical exposure to IoT systems, development platforms, and real-world implementation challenges. She has also worked as an IoT intern and robotics developer, building a solid foundation in hardware-software integration and emerging technologies. Before transitioning fully into technology journalism, she spent several years in academia as an Assistant Professor and Lecturer, teaching electronics and related subjects. This background reflects in her writing, which is structured, easy to understand, and highly educational for both students and professionals.

At Electronics For You, Nidhi covers a wide range of topics including embedded development, cloud-connected devices, and next-generation electronics platforms. Her work focuses on simplifying complex technologies while maintaining technical accuracy, helping engineers, developers, and learners stay updated in a rapidly evolving ecosystem.
